## A Little Project Background

Recently I’ve been working on developing a computer vision system for a long-standing customer. We are developing software that enables them to use computers to “see” where a particular object is in space and accurately determine its location with respect to the camera. From that information, they can do all kinds of useful things.

In order to figure out where something is in 3D space from a 2D image, you have to perform what is commonly referred to as pose estimation. It’s a highly interesting problem by itself, but it’s not something I want to focus on in detail here. If you are interested in more information, you can Google “pose estimation” or “PnP problem”. There are, however, a couple of aspects of the problem that do pertain to this blog article. First, pose estimation is typically a nonlinear, iterative process. (Not all algorithms are iterative, but the ones I’m using are.) Second, like any algorithm, its output is dependent upon its input; namely, the accuracy of the pose estimate depends upon the accuracy of the upstream image processing techniques. Whatever error happens upstream of this algorithm typically gets magnified as the algorithm processes the input.

## The Problem I Wish to Solve

You might be wondering where we are going with HPC given all this talk about computer vision. It’s true that computer vision, especially image processing, is computationally intensive, but I’m not going to focus on that aspect. The problem I wanted to solve was this: Is there a particular kind of pattern that I can use as a target for the vision system such that the pose estimation is less sensitive to the input noise? In order to quantify “less sensitive” I needed to do some statistics. Statistics is almost-math, but just a hair shy. You can translate that statement as: My brain neither likes nor speaks statistics… (The probability of me not understanding statistical jargon is statistically significant. I took a p-test in a cup to figure that out…) At any rate, one thing that ALL statistics requires is a data set. A big data set. Making big data sets sounds like an HPC problem, and hence it was time to roll my own HPC.

## The Toolbox and the Solution

My problem reduced down to a classic Monte Carlo type simulation. This particular type of problem maps very nicely onto a parallel processing paradigm known as Map-Reduce. The concept is shown below:

The idea is pretty simple. You break the problem into chunks and you “Map” those chunks onto available processors. The processors do some work and then you “Reduce” the solution from each chunk into a single answer. This algorithm is recursive. That is, any single “Chunk” can itself become a new blue “Problem” that can be subdivided. As you can see, you can get explosive parallelism.
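The idea above can be sketched in a few lines of plain JavaScript. This is just an in-process illustration of the Map-Reduce shape, not my actual simulator; the chunk size and the work function are made up for the example:

```javascript
// Map-Reduce in miniature: split the problem into chunks, "map" a work
// function over each chunk, then "reduce" the partial answers into one.
function mapReduce(problem, chunkSize, work, reduce) {
  const chunks = [];
  for (let i = 0; i < problem.length; i += chunkSize) {
    chunks.push(problem.slice(i, i + chunkSize));
  }
  const partials = chunks.map(work); // each chunk could go to its own processor
  return partials.reduce(reduce);
}

// Toy example: sum the squares of 1..10 in chunks of 3.
const data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const sumSquares = (chunk) => chunk.reduce((s, x) => s + x * x, 0);
const total = mapReduce(data, 3, sumSquares, (a, b) => a + b);
console.log(total); // 385
```

The recursion mentioned above falls out naturally: `work` itself could call `mapReduce` on its chunk, subdividing the problem again.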

Now, there are tools that exist for this kind of thing. Hadoop is one such tool. I’m sure it is vastly superior to what I ended up using and implementing. However, I didn’t want to invest time right now in learning a specialized tool for this particular problem. I wanted to investigate a lower-level tool on which this type of solution can be built. The tool I chose was node.js (www.nodejs.org).

I’m finding Node to be an awesome tool for hooking computers together in new and novel ways. It acts kind of like the post office: you can send letters and messages, and get letters and messages back, all while going about your normal day. It handles all of the coordinating and transporting. It basically sends out a helpful postman who taps you on the shoulder and says, “Hey, here’s a letter.” You are expected to do something (quickly) and maybe send back a letter to the original sender or someone else.

More specifically, node turns everything that a computer can do into a “tap on the shoulder”, or an event. A request like “Hey, go read this file for me” turns into, “OK, I’m happy to do that. I tell you what, I’ll tap you on the shoulder when I’m done. No need to wait for me.” So now, instead of twiddling your thumbs while the computer spins up the hard drive, finds the file and reads it, you get to go do something else you need to do. As you can imagine, this is a really awesome way of doing things when network latency, spinning hard drives and child processes doing useful work are all chewing up valuable time. That is time you could be using to get someone else started on some useful work.

Also, like all children, these helpful little child processes never seem to take the same amount of time to do the same task twice. However, simply being notified when they are done allows the coordinator to move on to other children. Think of a teacher in a classroom. Everyone is doing work, but not at the same pace. Imagine if the teacher could only focus on one child at a time until that child fully finished. Nothing would ever get done!

Here is a little graph of our internal cluster at PADT cranking away on my Monte Carlo simulation.

It’s probably impossible to read the axes, but that’s 1200+ cores cranking away. Now, here is the real kicker. All of the machines have an instance of node running on them, but one machine is coordinating the whole thing. The CPU on the master node barely nudges above idle. That is, this computer can manage and distribute all this work by barely lifting a finger.

## Conclusion

There are a couple of things I want to draw your attention to as I wrap this up.

1. CUBE systems aren’t only useful for CAE simulation HPC! They can be used for a wide range of HPC needs.
2. PADT has a great deal of experience in software development both within the CAE ecosystem and outside of this ecosystem. This is one of the more enjoyable aspects of my job in particular.
3. Learning new things is a blast and can have benefit in other aspects of life. Thinking about how to structure a problem as a series of events rather than a sequential series of steps has been very enlightening. In more ways than one, it is also why this blog article exists. My Monte Carlo simulator is running right now. I’m waiting on it to finish. My natural tendency is to busy wait. That is, spin brain cycles watching the CPU graph or the status counter tick down. However, in the time I’ve taken to write this article, my simulator has proceeded in parallel to my effort by eight steps. Each step represents generating and reducing a sample of 500,000,000 pose estimates! That is over 4 billion pose estimates in a little under an hour. I’ve managed to write 1,167 words…

## Paragon Space Dev Helps Shatter Stratospheric Jump Record

Without hoopla or sponsorship from a major beverage company, Alan Eustace assembled a team of experts to shatter the record for jumping from the stratosphere. Paragon Space Development, someone we can proudly call a PADT customer, formed the backbone of the team that made the StratEx project so successful. Using their experience with creating self-contained and safe human environments, and their general awesome engineering know-how, they developed a system that used just a space suit instead of a full capsule. This allowed the parachutist to leap from 135,908 ft!

View a video of the successful mission here: nyti.ms/1D7SmnT

Read about it here in the New York Times (Ignore their ignorance on commenting that Eustace could not hear the sonic boom… sigh… it’s a shockwave, not a boom!)

This project was about science and setting records, as well as proving out the technology for other uses. See where Paragon is going with this technology on the WorldView page, their answer to commercial space tourism that is practical and inspiring: worldviewexperience.com/

## We Are Celebrating Manufacturing in Arizona

The state of Arizona has a vibrant and robust manufacturing community, something that most people do not know. To highlight this strong part of the state’s economy, the month of October has been designated as Manufacturing Month. Learn more at the ACA website.

PADT has been busy participating in a variety of events throughout the month of October.  We are excited to celebrate the culmination of this amazing month.

Everyone is welcome!

What:  Celebrating Arizona Manufacturing – The Special Closing Event of the 2014 Arizona Manufacturer’s Month

When: October 30th, 4-7pm

Where: PADT – 7755 S. Research Drive, Tempe, AZ 85284

Food and drinks will be provided.

In addition to networking and celebrating, several companies involved in Manufacturing will be in attendance for an exhibit focused on the future of manufacturing.

Exhibitors attending include:

…and more

## See 3D Printed Art, Visit MATERIALIZE

We love art at PADT.  We especially love it when the tools we use, sell, and support for high-end engineering are used to create art. Last week we were able to participate in an event at the Shemer Art Center that did just that.  “MATERIALIZE: 3D Printing & Rapid Prototyping” is an exhibition that strives to educate artists and the public about new digital tools used to create art. The museum challenged artists to create original works using the capabilities of 3D printers.  PADT attended the opening on October 16th and showed off some of our own parts.

Here is a picture of PADT’s Mario Vargas explaining the technology behind 3D Printing to attendees:

The artwork created varied greatly, but all of it showed the power of 3D Printing to take ideas visualized on a computer and convert them directly to physical parts. We highly recommend that anyone interested in art or 3D Printing attend the exhibit, which closes on November 27th, 2014.

Here is a very nice cow piece:

And this is a collection of images from the website:

## Continue a Workbench Analysis in ANSYS MAPDL R15

This article outlines the steps required to continue a partially solved Workbench-based analysis using a Multi-Frame Restart and MAPDL batch mode. Along the way it covers:

• Some ways to interface between ANSYS Workbench and ANSYS MAPDL
• How to re-launch a run using a Multi-Frame Restart in ANSYS Batch mode
• The value of the jobname.abt functionality for Static Structural and Transient Structural analyses

Recently I was working in the ANSYS Workbench interface, running a Transient Structural analysis in the Mechanical application. I began my run thinking that my workstation had the necessary resources to complete the analysis in a reasonable amount of time. As the analysis slowly progressed, I began to realize that I needed to make a change and switch to a computer with more resources. But some of my analysis was already complete, and I did not want to lose that progress. In addition, I wanted to be able to monitor intermediate results to ensure that the analysis was advancing as I would like. This meant that however I decided to proceed, I needed to be sure I could still read my results back into Mechanical and retain the capability to restart again from a later point. Here were my options.

1: I could use the Remote Solve Manager (RSM) to continue running my analysis on a compute server machine. Check out this article for more on that.

I did use RSM in part, but perhaps you do not have RSM configured, or your compute resources are not connected through a network. In that case, there is another option you can use.

2: A Multi-Frame Restart using MAPDL in ANSYS Batch mode

Here’s the process:

1. Make note of the current load step and last converged substep that your analysis completed when you hit the Interrupt Solution button

2. Copy the *.rdb, *.ldhi, *.Rnnn files from the Solver Files Directory on the local machine to the Working Directory on the computing machine

You can find your Solver Files Directory by right clicking on the Solution Branch in the Model Tree and selecting Open Solver Files Directory:

3. Write an MAPDL input file with the commands to launch a restart and save it in the Working Directory on the computing machine (save with extension *.inp)

Below is an example of an input that will work well for restarting an analysis, but feel free to adjust it with the understanding that the ANSYS Programming Design Language (APDL) is a sophisticated language with a vast array of capability.
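As a minimal sketch, it can be as short as this. The load step and substep numbers here are placeholders; substitute the values you noted in Step 1. (A Multi-Frame Restart picks up the database from the *.rdb and *.Rnnn files automatically when the jobname matches.)

```
! Restart input sketch -- load step and substep are placeholders
/solu                  ! re-enter the solution processor
antype,,rest,2,14      ! restart from load step 2, substep 14 (use your values)
solve                  ! continue the solution
finish
```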

4. Start the MAPDL Product Launcher interface on the computing machine and:
a. Set Simulation Environment to ANSYS Batch
b. Navigate to your Working Directory
c. Set the jobname to the same name as that of the *.rdb file
d. Browse to the input file you generated in Step 3
e. Give your output file a descriptive name
f. Adjust parallel processing and memory settings as desired
g. Run

5. Look at the output file to see progress and monitor the run

6. Write “nonlinear” in a text file and save it as jobname.abt inside the Working Directory to cleanly interrupt the run and generate restart files when desired
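On Linux, for example, one quick way to create that file from the Working Directory is the following one-liner (“jobname” is a placeholder for your actual jobname):

```shell
# The solver checks for jobname.abt between substeps; a file containing
# "nonlinear" tells it to stop cleanly and write restart files.
echo nonlinear > jobname.abt
```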

The jobname.abt will appear briefly in the Working Directory

The output file will read the following:

Note that the jobname.abt interruption process is exactly the process that ANSYS uses in the background when the Interrupt Solution button is pressed interactively in Mechanical.

7. Copy all newly created files in Working Directory on the computing machine to the Solver Files Directory on the local machine

8. Back in the Mechanical application, highlight the Solution branch of the model tree, select Tools menu>Read Results Files… and navigate to the Solver Files Directory and read the updated *.rst file

After you have read in the results file, notice that the restart file generated from the interruption through the jobname.abt process appears as an option within the Mechanical interface under Analysis Settings

9. Review intermediate results to determine if analysis should continue or if adjustments need to be made

10. Repeat entire process to continue analysis using the new current loadstep and substep

Happy solving!

Here are some useful Help Documentation sections in ANSYS 15 for your reference:

• Understanding Solving:
• help/wb_sim/ds_Solving.html
• Mechanical APDL: Multiframe Restart:
• help/ans_bas/Hlp_G_BAS3_12.html#BASmultrestmap52199

## Video Tips: Create and Display Custom Units in ANSYS CFD-Post

By: Susanna Young

ANSYS CFD-Post is a powerful tool capable of post-processing results from multiple ANSYS tools including FLUENT, CFX, and Icepak. There are almost endless customizable options in ANSYS CFD-Post. This is a short video demonstrating how to create and display a set of custom units within the tool. Stay tuned for additional videos on tips for more effective post-processing in ANSYS CFD-Post.

## ANSYS Remote Solve Manager (RSM): Answers to Some Frequently Asked Questions

For you readers out there that use the ANSYS Remote Solve Manager (RSM) and have had one or all of the below questions, this post might just be for you!

1. What actually happens after I submit my job to RSM?
2. Where do the files needed to run the solve go?
3. How do the files get returned to the client machine, or do they?
4. What if something goes wrong with my solve or in the RSM file-downloading process? Is there any hope of recovery?
5. Are there any recommendations for how best to use RSM?

If your question is “How do I set up RSM as a user?”, your answers are here in a post by Ted Harris. Today’s post is a deeper dive into RSM.

The answers to questions 1 through 3 above are really only necessary if you would like to know the answer to question 4. My reason for giving you a greater understanding of the RSM process is so that you can do a better job of troubleshooting should your RSM job run into an issue.  Also, please note that this process is specifically for an RSM job submitted for ANSYS Mechanical. I have not tested this yet for a fluid flow run.

## What happens when a job gets submitted to RSM?

The following will answer questions 1-3 above.

When a job is run locally (on your machine), ANSYS uses the Solver Files Directory to store and update data. That folder can be found by right clicking on the Solution branch in the Model tree and selecting Open Solver Files Directory.

The project directory will be opened and you can see all of the existing files stored for your particular solution:

When a job gets submitted to RSM, the files that are stored in the above folder will be transferred to a series of two temporary directories: one on the client side (where you launched the job from) and one on the compute server side (where the numbers get crunched).

After you hit solve for a remote solve, you will notice that your project solver directory gets emptied. Those files are transferred to a temporary directory under the _ProjectScratch directory:

Next, these files get transferred to a temporary directory on the compute server. The files in the _ProjectScratch directory will remain there but the folder will not be updated again until the solve is interrupted or finished.

You can find the location of the compute server temporary directory by looking at the output log in the RSM queueing interface:

If you navigate to that directory on your compute server, you will see all of the necessary files needed to run. Depending on your IT structure, you may or may not have access to this directory, but it is there.

Here is a graphical overview of the route that your files will experience during the RSM solve process.

Once your run is complete (or you have interrupted it to review intermediate results) and your results have been downloaded and transferred to the solver files folder, both of the temporary directories get cleaned up and removed. That is the basic process that goes on behind the scenes when you submit a job to RSM.

## What if something goes wrong with my RSM job? Can I recover my data and re-read it into Workbench?

Recently, I ran into a problem with one of my RSM jobs that resulted in me losing all of the data generated during a two-day run. I haven’t determined the exact cause of the problem, but it did force me to dive into the RSM process and discover what I am sharing with you today. By pinpointing and understanding what goes on after the job is submitted to RSM, I determined that it can be possible to recover data, but only under certain circumstances and setup.

First, if you have the “Delete Job Files in Working Directory” box checked in the compute server properties menu accessed from the RSM queue interface (see below) and RSM sees your job as being completed, the answer to the above question is no, you will not be able to recover your data. Essentially, because the compute server is cleaned up and the temporary directory gets deleted, the files are lost.

To avoid lost data and prepare for such a catastrophe, my recommendation is that you, or your IT department, uncheck the “Delete Job Files in Working Directory” box. That way, you have a backup copy of your files stored on the server that you can delete later, once you are sure all of your files have been safely transferred to the solver files folder within your project directory structure.

The downside to having this box unchecked is that you have to manually clean up your server. Your IT department might not like this, or even allow it, because it could clutter your server if you do not stay on top of things. But it could be worth the safety net.

As for getting your data back into Workbench, you will need to manually copy the files on the compute server to your solver files folder in your Workbench project directory structure. I explained how to access this folder at the beginning of this post. Once you have copied those files, go back to the Mechanical application, highlight the Solution branch of your model tree, select Tools>Read Results Files… (see below graphic), navigate to your solver files directory, select the *.rst file and read it in.

Once the results file is read in, you should see whatever information is available.

## Recommendations

• Though it is possible to run concurrent RSM jobs from the same project, my recommendation is to only run one RSM job at a time from the same project in order to avoid communication or licensing holdups

• Unless you are confident that you will not ever need to recover files, consider unchecking the “Delete Job Files in Working Directory” box in the compute server properties menu.

• Note: if you are not allowed access to your compute server temporary directories, you should probably consult your IT department to get approval for this action.

• Caution: if you uncheck this box, be sure that you stay on top of cleaning up your compute server once you have your files successfully downloaded

• Depending on your network speed, when your results files get large, >15GB, be prepared to wait for upload and download times. There is likely activity, but you might not be able to “see” it in the progress information on the RSM output feed. Be patient or work outside of RSM using a batch MAPDL process.

• Avoid hitting the “Interrupt Solution” command more than once. I have not verified this, but I believe doing so can cause miscommunication between the compute server and local machine temporary directories, which can lead RSM to think that there are no files associated with your run to be transferred.

## 3D Printer/Scanner Bundles Now Available

We are very excited to offer three different 3D printer and scanner bundles to our customers. Each contains a variety of products to help you achieve optimal results. They are all products that we use here at PADT every day. It is everything you need for 3D scanning and printing that works!

Geomagic Capture Scanner and Mojo 3D Printer Bundle
Enjoy a commercial-level 3D scanner and printer combo at an affordable entry-level price. This bundle starts with a Geomagic Capture scanner coupled with Geomagic Wrap software, enabling users to easily and accurately scan a physical model and transform point cloud data, probe data and imported 3D formats (STL, OBJ, etc.) into 3D polygon meshes for use in manufacturing, design and analysis. Because we know your time is valuable, this bundle also includes a geoCUBE computer workstation built specifically for handling the large number of data points typically encountered when scanning. Finally, print your model in your choice of a variety of colors on ABSplus using your Mojo 3D printer. With this bundle you can easily go from scan to print for only $25,899. Training is included.

Departmental Desktop Printing Solutions
The Mojo 3D printer from Stratasys is a great solution to start making quality, durable 3D models right out of the box. And now the price makes it even more affordable to have a 3D printer in every department. A Mojo 3D printer with material package and Support Cleaning System now starts at $5,999, but we are offering a bundle discount for all your departmental needs, with prices starting at $28,995 for 5 and $57,490 for 10 Mojos. Everything you need to get a 3D printer on every floor!

Entry-level Reverse Engineering and Printing
This entry-level reverse engineering package starts with a Geomagic Capture scanner with Geomagic Wrap software so that you can seamlessly scan and process data. Construct a usable CAD model from your scan data using SpaceClaim’s geometry creation tools. Everything runs on the included geoCUBE computer workstation that efficiently handles the large amounts of data produced. When you are ready to print your design, you can use the included uPrint SE Plus. This all-in-one solution helps you save on design time and reduce outsourcing costs by bringing it all in-house, starting at $41,400 including training.

PADT Colorado is excited to be partnering again with Alignex for a 3D printer demo/happy hour at their upcoming networking event.

The event is from 10 am to 6 pm and will feature guest speakers discussing the latest in engineering and design productivity. PADT will be on site to discuss 3D printing during the happy hour from 5 to 6 pm.

## Four Events to Help Celebrate Manufacturing in Arizona

The month of October in Arizona is Manufacturer’s Month. Part of the Arizona Commerce Authority’s RevAZ program, this month of celebrations is an opportunity for those of us who make stuff, or support people who make stuff, to spread the word that the manufacturing community is robust, diverse, and has a major impact on the local economy.

PADT is attending three events and hosting the closing event for the month. We hope to run into you at the first three, and consider this the first of many invitations to join us for an open house and celebration on October 30th.

The three events open to the public are:

October 3rd, 2014 – 10:00 to 2:00
National Manufacturing Day Open House at AzAMI

The Arizona Advanced Manufacturing Institute (AzAMI) at Mesa Community College (MCC) is celebrating National Manufacturing Day by opening its doors to the community. Guided Tours of its enhanced machining, processing and additive manufacturing labs will be offered between 10am and 2pm.

1833 W. Southern Ave
Mesa, AZ 85202

Check out the event details here.

October 15th, 2014 – 12:30 to 6:30

AZTC Southern Arizona Tech + Business Expo: Where Technology and Manufacturing Connect

The Southern Arizona Tech + Business Expo is the region’s premier showcase event for Manufacturing Month. Working in collaboration with the Southern Arizona Manufacturing Partners (SAMP), the Arizona Manufacturing Council (AMC), the RevAZ program of the Arizona Commerce Authority, and the University of Arizona’s Tech Launch Arizona, the Expo will host informative panel discussions on strategies to grow your business faster.

The Westin La Paloma Resort
3800 E. Sunrise Drive
Tucson, AZ 85718

Check out the event details and register here.

October 30th, 2014 – 4:00 to 7:00pm

Celebrating Arizona Manufacturing

PADT is proud to host the closing celebration for Arizona Manufacturer’s Month. A variety of companies and organizations will be exhibiting their activities in the future of manufacturing. Visitors will get a chance to see some of the more advanced applications of manufacturing in the state as well as tours of the PADT facility.

Food and drinks will be provided along with great opportunities to network and get to know the community a little better.