Phoenix Business Journal: ​What the heck is happening? There is too much going on in Arizona tech

We have a problem now in the Arizona tech community – there is just too much going on.  In “What the heck is happening? There is too much going on in Arizona tech” we look at why this is a good thing and what we can do to make it even better.


ANSYS 18 – AIM Enhancements Webinar

We here at PADT are excited to share with you the updates that ANSYS 18 brings to the table for AIM: The easy-to-use, upfront simulation tool for all design engineers.

ANSYS AIM is a single GUI, multiple physics tool with advanced ANSYS technology under the hood. It requires minimal training and is interoperable with a wide range of ANSYS simulation products.

Join PADT’s application engineer Tyler Smith as he covers the new features and capabilities available in this new release, including:

  • Magnetic frequency response
  • One-way FSI for shell structures
  • Model transfer to Fluent
  • One-way magnetic-thermal coupling
  • and many more!

ANSYS AIM is a perfect tool for companies performing simulation with a CAD-embedded tool, for design engineers at companies using high-end simulation, and even for companies that have yet to take the plunge into the world of simulation.

Register for this webinar today and learn how you can take advantage of the easy-to-use, yet highly beneficial capabilities of ANSYS AIM.


License Usage and Reporting with ANSYS License Manager Release 18.0

Remember the good old days of combing through hundreds or thousands of lines of text in multiple files to see ANSYS license usage information?  Hitting Ctrl+F and searching for license names?  Well, those days were only a couple of months ago, and they are over…for the most part.

With ANSYS License Manager Release 18.0, we have some pretty nifty built-in license reporting tools that help extract information from the log files so the administrator can see anything from current license usage to peak usage and even any license denials that occur.  Let’s take a look at how to do this:

First thing is to open up the License Management Center:

  • In Windows you can find this by going to Start>Programs>ANSYS Inc License Manager>ANSYS License Management Center
  • On Linux you can launch it by running /ansys_inc/shared_files/licensing/start_lmcenter

This will open the License Management Center in your default browser, as shown below.  For reporting, take a look at the Reporting section; we’ll cover each of its four options below.

License Management Center at Release 18.0

License Reporting Options

VIEW CURRENT LICENSE USAGE

As the title says, this is where you’ll go to see a breakdown of current license usage.  You can see all the licenses you have on the server, how many of each are in use, and who is using them (through the color of the bars).  Please note that PADT has access to several ANSYS licenses; your list will only include the licenses available for use on your server.

Scrolling page that shows Current License Usage and Color Coded Usernames

You can also click on Show Tabular Data to see a table view that you can export to Excel if you want to do your own manipulation of the data.

Tabular Data of Current License Usage – easy to export

VIEW LICENSE USAGE HISTORY

In this section you can not only isolate the license usage to a specific time period, you can also filter by license type.  Use the first drop-down to define a time range: the previous month, the previous year, all available data, or even a custom time range.

Isolate License Usage to Specific Time Period

Once you hit Generate you can then isolate by license name, as shown below.  I’ve included some examples as well.  The axis on the left shows the number of licenses used.

Filter Time History by License Name

1 month history of ANSYS Mechanical Enterprise

 1 month history of ANSYS CFD

Custom Date Range history of ANSYS SpaceClaim Direct Modeler

VIEW PEAK LICENSE USAGE

This section allows you to see the peak usage of a particular license during a particular time period, filtered by date range.  The first step is to isolate a date range as before, for example 1 month.  Then you can select which month you want to look at the data for.

Selecting specific month to look at Peak License Usage

Then you can restrict the data to an operational period: 24/7, Monday to Friday 24/5, or Monday to Friday 9am-5pm.  This way you can compare license usage across every day of the week, the working week, or normal working hours.  Again, the axis on the left shows the number of licenses.

Isolating data to 24/7, Weekdays or Weekday Working Hours

 Peak License Usage in March 2017 of ANSYS Mechanical Enterprise (24/7)

Peak License Usage in February 2017 of ANSYS CFD (Weekdays Only)

VIEW LICENSE DENIALS

If any of the users accessing the License Manager get license denials, due to insufficient licenses or for any other reason, they will be displayed in this section.  Since PADT rarely, if ever, gets license denials, this section is blank for us.  The procedure is identical to the sections above: isolate the data to a time period and filter it down to the quantities you are interested in.

Isolate data by time period, as in the other sections

Although these 4 options don’t include every conceivable filtering method, they should allow managers and administrators to slice license usage in many different ways without having to manually go through all the log files.   It is a convenient and easy set of options for extracting the information.
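If you need a slice of the data that the built-in reports don’t offer, you can export the tabular view described above and post-process it yourself. Below is a minimal sketch in Python, assuming the table was saved as license_usage.csv with columns named “License” and “Count”; the file name and column names are assumptions, so adjust them to match your actual export.

import csv
from collections import Counter

# Tally licenses in use per license name from an exported table.
usage = Counter()
with open("license_usage.csv", newline="") as f:
    for row in csv.DictReader(f):
        usage[row["License"]] += int(row.get("Count", 1))

for name, in_use in usage.most_common():
    print("%-40s %d" % (name, in_use))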

Please let us know if you have any questions on this or anything else with ANSYS.


DesignCon 2017 Trends in Chip, Board, and System Design


Considered the “largest gathering of chip, board, and systems designers in the country,” with over 5,000 attendees this year and over 150 technical presentations and workshops, DesignCon exhibits state-of-the-art trends in the high-speed communications and semiconductor communities.

Here are the top 5 trends I noticed while attending DesignCon 2017:

1. Higher data rates and power efficiency.

This is of course a continuing trend and the most obvious. Still, I like to see this trend alive and well because I think it gets a bit trickier every year. Aiming toward 400 Gbps solutions, many vendors and papers were demonstrating 56 Gbps and 112 Gbps channels, with no fewer than 19 sessions with 56 Gbps or more in the title. While IC manufacturers continue to develop low-power chips, connector manufacturers are offering more vented housings as well as integrated heat sinks to address thermal challenges.

2. More conductor-based signaling.

PAM4 was everywhere on the exhibition floor, and there were 11 sessions with PAM4 in the title. Shielded twinaxial cable was the predominant conductor-based technology, with offerings such as Samtec’s Twinax Flyover and Molex’s BiPass.

A touted feature of twinax is the ability to route over components and free up PCB real estate (but there is still concern for enclosing the cabling). My DesignCon 2017 session, titled Replacing High-Speed Bottlenecks with PCB Superhighways, would also fall into this category. Instead of using twinax, I explored the idea of using rectangular waveguides (along with coax feeds), which you can read more about here. I also offered a modular concept that reflects similar routing and real estate advantages.

3. Less optical-based signaling.

Don’t get me wrong, optical-based signaling is still a strong solution for high-speed channels. Many of the twinax solutions are being designed to be compatible with fiber connections and, as Teledyne put it in their QPHY-56G-PAM4 option release at DesignCon, Optical Internetworking Forum (OIF) and IEEE are both rapidly standardizing PAM4-based interfaces. Still, the focus from the vendors was on lower cost conductor-based solutions. So, I think the question of when a full optical transition will be necessary still stands.
With that in mind, this trend is relative to what I saw only a couple years back. At DesignCon 2015, it looked as if the path forward was going to be fully embracing optical-based signaling. This year, I saw only one session on fiber and, as far as I could tell, none on photonic devices. That’s compared to DesignCon 2015 with at least 5 sessions on fiber and photonics, as well as a keynote session on silicon photonics from Intel Fellow Dr. Mario Paniccia.

4. More Physics-based Simulations.

As margins continue to shrink, the demand for accurate simulation grows. Dr. Zoltan Cendes, founder of Ansoft, shared the difficulties of electromagnetic simulation over the past 40+ years and how Ansoft (now ANSYS) has improved accuracy, simplified the simulation process, and significantly reduced simulation time. To my personal delight, he also had a rectangular waveguide in his presentation (and I think we were the only two). Dr. Cendes sees high-speed electrical design at a transition point, where engineers have begun, or ultimately will need, to place physics-based simulations at the forefront of the design process, or as he put it, “turning signal integrity simulation inside out.” A closer look at Dr. Cendes’ keynote presentation can be found in DesignNews.

5. More Detailed IC Models.

This may or may not be a trend yet, but improving IC models (including improved data sheet details) was a popular topic among presenters and attendees alike; so if nothing else it was a trend of camaraderie. There were 12 sessions with IBIS-AMI in the title. In truth, I don’t typically attend these sessions, but since behavioral models (such as IBIS-AMI) impact everyone at DesignCon, this topic came up in several sessions that I did attend even though they weren’t focused on it. Perhaps with continued development of simulation solutions like ANSYS’ Chip-Package-System, Dr. Cendes’ prediction will one day make a comprehensive physics-based design (including IC models) a practical reality. Until then, I would like to share an interesting quote from George E. P. Box that was restated in one of the sessions: “Essentially all models are wrong, but some are useful.” I think this is good advice: it gives me clarity in the moment and excitement for the future.

By the way, the visual notes shown above were created by Kelly Kingman from kingmanink.com on the spot during presentations. As an engineer, I was blown away by this. I have a tendency to obsess over details but she somehow captured all of the critical points on the fly with great graphics that clearly relay the message. Amazing!


How To Update The Firmware Of An Intel® Solid-State Drive DC P3600

How To Update The Firmware Of An Intel® Solid-State Drive DC P3600 in four easy steps!

The Dr. says to keep that firmware fresh! So in this How To blog post I show you how to verify and/or update the firmware on a 1.2TB Intel® Solid-State Drive DC P3600 Series NVMe MLC card.

CUBE Workstation Specifications – The Tester

PADT, Inc. – CUBE w32i Numerical Simulation Workstation

  • 2 x 16c @2.6GHz/ea. (INTEL XEON e5-2697A V4 CPU), 40M Cache, 9.6GT, 145 Watt/each
  • Dual Socket Super Micro X10DAi motherboard
  • 8 x 32GB DDR4-2400MHz ECC REG DIMM
  • 1 x NVIDIA QUADRO M2000 – 4GB GDDR5
  • 1 x  Intel® DC P3600 1.2TB, NVMe PCIe 3.0, MLC AIC 20nm
  • Windows 7 Ultimate Edition 64-bit

Step 1: Prepping

Check for and download the latest software for the Intel® SSD DC P3600 Series here: https://downloadcenter.intel.com/product/81000/Intel-SSD-DC-P3600-Series

You will need the latest versions of:

  • Intel® Solid State Drive Toolbox
  • Intel® SSD Data Center Tool
  • Intel® SSD Data Center Family for NVMe Drivers

Step 2: Installation

After installing the Intel® Solid State Drive Toolbox and the Intel® SSD Data Center Tool, reboot the workstation and move on to the next step.

INTEL SSD Toolbox

INTEL SSD Toolbox Install

Step 3: Trust But Verify

Check the status of the 1.2TB NVMe card by running the INTEL SSD Data Center Tool.  I am using Windows 7 Ultimate 64-bit as the operating system, and I run the Data Center Tool from an elevated command prompt.

Right-Click -> Run As… Administrator
Command Line Text: isdct show -intelssd

INTEL DATA Center Command Line Tool

As the image below indicates, the firmware for this 1.2TB NVMe card is happy and up to date! Yay!

If you have more than one SSD take note of the Drive Number.

  • Pro Tip – In this example the INTEL DC P3600 is drive number zero. You can gather this information from the output: Index : 0

Below is what the command line output text looks like while the firmware process is running.

C:\isdct> isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): y
Updating firmware… The selected Intel SSD contains current firmware as of this tool release.

C:\isdct> isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): n
Canceled.

C:\isdct> isdct.exe load -f -intelssd 0
Updating firmware… The selected Intel SSD contains current firmware as of this tool release.

C:\isdct> isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): y
Updating firmware… Firmware update successful.
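If you have several of these cards, or several machines to check, it may be handy to script the status check rather than read the console output by hand. Below is a minimal sketch in Python; it assumes isdct.exe is on the PATH and simply echoes the key : value lines of interest (exact field names can vary between tool releases), so treat it as an illustration rather than a supported workflow.

import subprocess

def show_intel_ssds():
    # Same command used above, just captured instead of printed
    result = subprocess.run(
        ["isdct.exe", "show", "-intelssd"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def summarize(output):
    # Echo only the lines we care about, e.g. "Index : 0" or firmware fields
    for line in output.splitlines():
        if "Index" in line or "Firmware" in line:
            print(line.strip())

if __name__ == "__main__":
    summarize(show_intel_ssds())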

Step 4: Reboot Workstation

The firmware update process has been completed.

shutdown /r


Using External Data in ANSYS Mechanical to Apply Tabular Loads with Multiple Variables

ANSYS Mechanical is great at applying tabular loads that vary with a single independent variable, say time or Z.  But what if you want a tabular load that varies in multiple directions and time?  You can use the External Data tool to do just that.  You can also create a table with a single variable and then modify it in the Command Editor.
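As a quick illustration of the kind of input External Data expects, the sketch below writes a simple comma-delimited file with coordinate columns and one pressure column per time point. The column layout, file name, and pressure formula are assumptions for illustration only; map the columns to coordinates and data when you import the file, and see the presentation below for the full workflow.

import csv

# Hypothetical pressure that varies with X, Y and time (units are up to you)
def pressure(x, y, t):
    return 1000.0 * (1.0 + 0.5 * x + 0.25 * y) * t

times = [0.5, 1.0, 1.5]          # one data column per time point
xs = [0.0, 0.05, 0.10]
ys = [0.0, 0.02, 0.04]

with open("pressure_map.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["X", "Y", "Z"] + ["P_t%g" % t for t in times])
    for x in xs:
        for y in ys:
            writer.writerow([x, y, 0.0] + [pressure(x, y, t) for t in times])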

In the Presentation below, I show how to do all of this in a step-by-step description.

PADT-ANSYS-Tabular-Loading-ANSYS-18

You can also download the presentation here.


Phoenix Business Journal: ​Emoticons as a way to communicate in business

It started with kids and texting, but now emoticons can be a positive part of business communication :-O   In “Emoticons as a way to communicate in business” I look at some examples and explain why I think it’s a good thing ;-).


AZ Business Magazine: It’s time for Arizona startups to grow up

At some point it’s time to get real.  “It’s time for Arizona startups to grow up” looks at how we need to stop focusing on getting ready for success and start achieving it.  We were pleased to write the first article for AZBigMedia.com‘s new “Silicon Desert Insider” blog, which shares my thoughts on why it’s time for some tough love.  Brought to you by AZ Business Magazine, the blog focuses on the technology side of business in the Phoenix area.


Experiences with Developing a “Somewhat Large” ACT Extension in ANSYS

With each release of ANSYS the customization toolkit continues to evolve and grow.  Recently I developed what I would categorize as a decent-sized ACT extension.  My purpose in this post is to highlight a few of the techniques and best practices that I learned along the way.

Why I Chose C#

Most ACT extensions are written in Python.  Python is a wonderfully useful language for quickly prototyping and building applications, frankly of all shapes and sizes.  Its weaker type system, plethora of libraries, large ecosystem and native support directly within the ACT console make it a natural choice for most ACT work.  So, why choose to move to C#?

The primary reasons I chose to use C# instead of python for my ACT work were the following:

  1. I prefer the slightly stronger type safety afforded by the more strongly typed language. Having a definitive compilation step forces me to show my code first to a compiler.  Only if and when the compiler can generate an assembly for my source do I get to move to the next step of trying to run/debug.  Bugs caught at compile time are the cheapest and generally easiest bugs to fix.  And, by definition, they are the most likely to be fixed.  (You’re stuck until you do…)
  2. The C# development experience is deeply integrated into the Visual Studio developer tool. This affords not only a great editor in which to write the code, but more importantly perhaps the world’s best debugger to figure out when and how things went wrong.   While it is possible to both edit and debug python code in Visual Studio, the C# experience is vastly superior.

The Cost of Doing ACT Business in C#

Unfortunately, writing an ACT extension in C# does incur some development cost in terms of setting up the development environment to support the work.  When writing an extension solely in Python you really only need a decent text editor.  Once you set up your ACT extension according to the documented directory structure protocol, you can just edit the python script files directly within that directory structure.  If you recall, ACT requires an XML file to define the extension and then a directory with the same name that contains all of the assets defining the extension, like scripts, images, etc.  This “defines” the extension.
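For reference, an extension laid out according to that convention might look something like the hypothetical structure below. Only the XML-file-plus-matching-directory rule comes from ACT; the extension name, the bin subfolder for the compiled DLL, and the other file names are illustrative assumptions.

MyExtension.xml            (defines the extension)
MyExtension/
    main.py                (bootstrap script loaded by Mechanical)
    images/
        button_icon.png
    bin/
        MyExtension.dll    (compiled C# assembly; location is an assumption)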

When it comes to laying out the requisite ACT extension directory structure on disk, C# complicates things a bit.  As mentioned earlier, C# involves a compilation step that produces a DLL.  This DLL must then somehow be loaded into Mechanical to be used within the extension.  To complicate things a little further, Visual Studio uses a predefined project directory structure that places the build products (DLLs, etc…) within specific directories of the project depending on what type of build you are performing.   Therefore the compiled DLL may end up in any number of different directories depending on how you decide to build the project.  Finally, I have found that the debugging experience within Visual Studio is best served by leaving the DLL located precisely wherever Visual Studio created it.

Here is a summary list of the requirements/problems I encountered when building an ACT extension using C#:

  1. I need to somehow load the produced DLL into Mechanical so my extension can use it.
  2. The DLL that is produced during compilation may end up in any number of different directories on disk.
  3. An ACT Extension must conform to a predefined structural layout on the filesystem. This layout does not map cleanly to the Visual studio project layout.
  4. The debugging experience in Visual Studio is best served by leaving the produced DLL exactly where Visual Studio left it.

The solution that I came up with to solve these problems was twofold.

First, the issue of loading the proper DLL into Mechanical was solved by using a combination of environment variables on my development machine in conjunction with some Python programming within the ACT main python script.  Yes, even though the bulk of the extension is written in C#, there is still a python script to sort of boot-load the extension into Mechanical.  More on that below.

Second, I decided to completely rebuild the ACT extension directory structure on my local filesystem every time I built the project in C#.  To accomplish this, I created in visual studio what are known as post-build events that allow you to specify an action to occur automatically after the project is successfully built.  This action can be quite generic.  In my case, the “action” was to locally run a python script and provide it with a few arguments on the command line.  More on that below.

Loading the Proper DLL into Mechanical

As I mentioned above, even an ACT extension written in C# requires a bit of Python code to bootstrap it into Mechanical.  It is within this bit of Python that I chose to tackle the problem of deciding which DLL to actually load.  The logic of that code works as follows.

Essentially what the bootstrap script does is query for the presence of a particular environment variable that exists on my development machine.  (The assumption is that it wouldn’t randomly show up on an end user’s machine…) If that variable is found and its value is 1, then I determine whether to load a debug or release version of the DLL depending on the type of build.  I use two additional environment variables to specify where the debug and release directories for my Visual Studio project live.  Finally, if I determine that I’m running on a user’s machine, I simply look for the DLL in the proper location within the extension directory.  Setting up my python script in this way lets me forget about having to edit it once I’m ready to share my extension with someone else.  It just works.
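PADT’s original script isn’t reproduced here, but a minimal sketch of that kind of bootstrap logic, using placeholder environment variable, directory, and DLL names, might look like this:

import os
import clr  # .NET bridge available to ACT's IronPython

def load_extension_dll(extension_dir):
    # MYEXT_DEV is a hypothetical flag set only on the development machine
    if os.environ.get("MYEXT_DEV", "0") == "1":
        # MYEXT_DEBUG picks between the Visual Studio build outputs;
        # MYEXT_DEBUG_DIR / MYEXT_RELEASE_DIR point at those build folders
        if os.environ.get("MYEXT_DEBUG", "0") == "1":
            dll_dir = os.environ["MYEXT_DEBUG_DIR"]
        else:
            dll_dir = os.environ["MYEXT_RELEASE_DIR"]
    else:
        # On an end user's machine the DLL ships inside the extension directory
        dll_dir = os.path.join(extension_dir, "bin")
    clr.AddReferenceToFileAndPath(os.path.join(dll_dir, "MyExtension.dll"))

Because the development-only environment variables simply don’t exist on an end user’s machine, the same script runs unchanged in both places.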

Rebuilding the ACT Extension Directory Structure

The final piece of the puzzle involves rebuilding the ACT extension directory structure upon the completion of a successful build.  I do this for a few different reasons.

  1. I always want to have a pristine copy of my extension laid out on disk in a manner that could be easily shared with others.
  2. I like to store all of the various extension assets, like images, XML files, python files, etc… within the Visual Studio Project. In this way, I can force the project to be out of date and in need of a rebuild if any of these files change.  I find this particularly useful for working with the XML definition file for the extension.
  3. Having all of these files within the Visual Studio Project makes tracking things within a version control system like SVN or git much easier.

As I mentioned before, to accomplish this task I use a combination of local python scripting and post build events in Visual Studio.  I won’t show the entire python code, but essentially what it does is programmatically work through my local file system where the C# code is built and extract all of the files needed to form the ACT extension.  It then deletes any old extension files that might exist from a previous build and lays down a completely new ACT extension directory structure in the specified location.  The post build event itself is defined within the project settings in Visual Studio.

All the post build event does is call out to the system python interpreter and pass it a script with some arguments.  Visual Studio provides a great number of predefined variables (macros) that you can use to build up the command line for your script.  So, for example, I pass in a string that specifies what type of build I am currently performing, either “Debug” or “Release”.  Other strings are passed in to represent directories, and so on.
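The exact project settings aren’t reproduced here, but a post-build event command along these general lines would do the job. The script name and argument order are placeholders, while $(ConfigurationName), $(TargetDir) and $(ProjectDir) are standard Visual Studio macros:

python.exe "$(ProjectDir)rebuild_extension.py" $(ConfigurationName) "$(TargetDir)" "$(ProjectDir)"

A stripped-down sketch of the script it calls, under the same naming assumptions, might look like:

import os
import shutil
import sys

def rebuild_extension(config, target_dir, project_dir):
    # Hypothetical location where the pristine extension layout is rebuilt
    ext_name = "MyExtension"
    out_root = os.path.join(project_dir, "extension")
    out_dir = os.path.join(out_root, ext_name)

    # Wipe any layout left over from a previous build
    if os.path.isdir(out_dir):
        shutil.rmtree(out_dir)
    os.makedirs(os.path.join(out_dir, "bin"))

    # Copy the assets stored in the project plus the freshly built DLL
    shutil.copy(os.path.join(project_dir, ext_name + ".xml"), out_root)
    shutil.copy(os.path.join(project_dir, "main.py"), out_dir)
    shutil.copy(os.path.join(target_dir, ext_name + ".dll"),
                os.path.join(out_dir, "bin"))

    # A real version would also tweak the XML and main.py for Debug vs. Release,
    # as described in the next section; "config" is available for that purpose.

if __name__ == "__main__":
    rebuild_extension(sys.argv[1], sys.argv[2], sys.argv[3])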

The Synergies of Using Both Approaches

Finally, I will conclude with a note on the synergies you can achieve by using both of the approaches mentioned above.  One of the final enhancements I made to my post build script was to allow it to “edit” some of the text based assets that are used to define the ACT extension.  A text based asset is something like an XML file or python script.  What I came to realize is that certain aspects of the XML file that define the extension need to be different depending upon whether or not I wish to debug the extension locally or release the extension for an end user to consume.  Since I didn’t want to have to remember to make those modifications before I “released” the extension for someone else to use, I decided to encode those modifications into my post build script.  If the post build script was run after a “debug” build, I coded it to configure the extension for optimal debugging on my local machine.  However, if I built a “release” version of the extension, the post build script would slightly alter the XML definition file and the main python file to make it more suitable for running on an end user machine.   By automating it in this way, I could easily build for either scenario and confidently know that the resulting extension would be optimally configured for the particular end use.

Conclusions

Now that I have some experience in writing ACT extensions in C#, I must honestly say that I prefer it over Python.  Much of the “extra plumbing” that one must invest in to get a C# extension up and running can be automated using the techniques described in this post.  After the requisite automation is set up, the development process is really straightforward.  From that point onward, the increased debugging fidelity, added type safety, and familiarity of a C-based language make the development experience that much better!  Also, there are some cool things you can do in C# that I’m not 100% sure you can accomplish in Python alone.  More on that in later posts!

If you have ideas for an ACT extension to better serve your business needs and would like to speak with someone who has developed some extensions, please drop us a line.  We’d be happy to help out however we can!

Phoenix Business Journal: ​How technology can bring people together and bring down barriers

We talk about technology all the time and how it impacts our daily lives, good and bad.  “​How technology can bring people together and bring down barriers” takes a look at the social impact of technology when it comes to creating understanding and community.


Phoenix Business Journal: Installing a metal 3-D printer was a lesson on working with regulators

While installing our new metal 3D printer we learned a couple of important lessons about working with local inspectors.  In “Installing a metal 3-D printer was a lesson on working with regulators” we share what we learned.


inBusiness: Riding the Wave of 3-D Printing

The Greater Phoenix inBusiness magazine just did a profile on PADT and co-founder Eric Miller.  “Eric Miller: Riding the Wave of 3-D Printing” gives some history and insight into what makes PADT unique.  It even includes some fun facts about the company.


Kids, Race Cars, and Scanned Turtle Shells: PADT’s 2017 SciTech Festival Open House

There is something about a kid running down a hallway screaming “mom, you HAVE to see this!” #openhousegoals.

Last night was our annual event where we open up the doors of PADT for a family-oriented evening sharing what we engineers do.  We also invited high school and university students to share their engineering activities.  With over 250 attendees and more than one excited kid running down the hall, we can safely call it a success.

Attendees were able to see our 3D Printing demo room, including dozens of real 3D printed parts, learn about engineering, explore how 3D Printing works, and check out our new metal 3D printer. They were also able to learn about school projects like the ASU Formula SAE race car, as well as a prosthetic hand project and research into cellular structures in nature from BASIS Chandler.

Oh, and there was Pizza.

Pictures speak louder than words, so here is a gallery of images from the event.


Connection Groups and Your Sanity in ANSYS Mechanical

You kids don’t know how good you have it with automatic contact creation in Mechanical.  Back in my day, I’d have to use the contact wizard in MAPDL or show off my mastery of the ESURF command to define contacts between parts.  Sure, there were some macros somewhere on the interwebs that would go through and loop for surfaces within a particular offset, but for the sake of this stereotypical “old-tyme” rant, I didn’t use them (I actually didn’t, I was just TOO good at using ESURF to need anyone else’s help).


Hey, it gets me from point A to B

In Mechanical contact is automatically generated based on a set of rules contained in the ‘Connection Group’ object:

image

It might look a little overwhelming, but really the only thing you’ll need to play around with is the ‘Tolerance Type’.  This can be either ‘Slider’ or ‘Value’ (or use sheet thickness if you’re working with shells).  What this controls is the face offset value for which Mechanical will automatically build contact.  So in the picture shown above, faces that are 5.9939E-3 in apart will automatically have contact created.  You can play around with the slider value to change what that tolerance is.

image image image

As you can see, the smaller the tolerance slider the larger the ‘acceptable’ gap becomes.  If you change the Tolerance Type to be ‘Value’ then you can just directly type in a number.

Typically the default values do a pretty good job automatically defining contact.  However, what happens if you have a large assembly with a lot of thin parts?  Then what you run into is nonsensical contact between parts that don’t actually touch (full disclosure, I actually had to modify the contact settings to have the auto-generated contact do something like this…but I have seen this in other assemblies with very thin/slender parts stacked on top of each other):

image

In the image above, we see that contact has been defined between the bolt head and a plate when there is clearly a washer present.  So we can fix this by going in and specifying a value of 0, meaning that only surfaces that are touching will have contact defined.  But now let’s say that some parts of your assembly aren’t touching (maybe it’s bad CAD, maybe it’s a welded assembly, maybe you suppressed parts that weren’t important).

image

The brute force way to handle this would be to set the auto-detection value to be 0 and then go back and manually define the missing contacts using the options shown in the image above.  Or, what we could do is modify the auto-contact to be broken up into groups and apply appropriate rules as necessary.  The other benefit to this is if you’re working in large assemblies, you can retain your sanity by having contact generated region by region.   In the words of the original FE-guru, Honest Abe, it’s easier to manage things when they’re logically broken up into chunks.

image

Said No One Ever

Sorry…that was bad.  I figured in the new alt-fact world with falsely-attributed quotes to historical leaders, I might as well make something up for the oft-overlooked FE-crowd.

So, how do you go about implementing this?  Easy, first just delete the default connection group (right-mouse-click on it and select delete).  Next, just select a group of bodies and click the ‘Connection Group’ button:

image image image

In the image series above, I selected all the bolts and washers, clicked the connection group, and now I have created a connection group that will only automatically generate contact between the bolts and washers.  I don’t have to worry about contact being generated between the bolt and plate.  Rinse, lather, and repeat the process until you’ve created all the groups you want:

image

ALL the Connection Groups!

Now that you have all these connection groups, you can fine-tune the auto-detection rules to meet the ‘needs’ of those individual body groups.  Just zooming in on one of the groups:

image

By default, when I generate contact for this group I’ll get two contact pairs:

image image

While this may work, let’s say I don’t want a single contact pair for the two dome-like structures, but 2.  That way I can just change the behavior on the outer ‘ring’ to be frictionless and force the top to be bonded:

image

I modified the auto-detection tolerance to be a user-defined distance (note that when you type in a number and move your mouse over into the graphics window you will see a bulls-eye that indicates the search radius you just defined).  Next, I told the auto-detection not to group any auto-detected contacts together.  The result is I now get 3 contact pairs defined:

image image image

Now I can just modify the auto-generated contact shown in the middle picture of the series above to be frictionless.  I could certainly just manually define the contact regions, but if you have an assembly of dozens or hundreds of parts it’s significantly easier to have Mechanical build up all the contact regions and then modify individual contact pairs to have the type/behavior/etc. you want (bonded, frictionless, symmetric, asymmetric, custom pinball radius, etc.).  This is also useful if you have bodies that need to be connected via face-to-edge or edge-to-edge contact (then you can set the appropriate priority as to which, if any, of those types should be preserved over others).

So the plus side to doing all of this is that after any kind of geometry update you shouldn’t have much, if any, contact ‘repair’ to do.  All the bodies/rules have already been fine tuned to automatically build what you want/need.  You also know where to look to modify contacts (although using the ‘go to’ functionality makes that pretty easy as well).  That way you can define all these connection groups, leave everything as bonded and do a preliminary solve to ensure things look ‘okay’.  Then go back and start introducing some more reality into the simulation by allowing certain regions to move relative to each other.

The downside to doing your contacts this way is you risk missing an interface because you’re now defining the load path.  To deal with that you can just insert a dummy-modal environment into your project, solve, and check that you don’t have any 0-Hz modes.


Exploring High-Frequency Electromagnetic Theory with ANSYS HFSS

I recently had the opportunity to present an interesting experimental research paper at DesignCon 2017, titled Replacing High-Speed Bottlenecks with PCB Superhighways. The motivation behind the research was to develop a new high-speed signaling system using rectangular waveguides, but the most exciting aspect for me personally was salvaging a (perhaps contentious) 70-year-old first-principles electromagnetic model. While it took some time to really understand how to apply the mathematics to design, doing so led to an exciting convergence of theory, simulation, and measurement.

One of the most critical aspects of the design was exciting the waveguide with a monopole probe antenna. Many different techniques have been developed to match the antenna impedance to the waveguide impedance at the desired frequency, as well as increase the bandwidth. Yet, all of them rely on assumptions and empirical measurement studies. Optimizing a design to nanometer precision empirically would be difficult at best and even if the answer was found it wouldn’t inherently reveal the physics. To solve this problem, we needed a first-principles model, a simulation tool that could quickly iterate designs accurately, and some measurements to validate the simulation methodology.

A rigorous first-principles model was developed by Robert Collin in 1960, but this solution has since been forgotten and replaced by simplified rules. Unfortunately, these simplified rules are unable to deliver an optimal design or offer any useful insight to the critical parameters. In fairness, Collin’s equations are difficult to implement in design and validating them with measurement would be tedious and expensive. Because of this, empirical measurements have been considered a faster and cheaper alternative. However, we wanted the best of both worlds… we wanted the best design, for the lowest cost, and we wanted the results quickly.

For this study, we used ANSYS HFSS to simulate our designs. Before exploring new designs, we first wanted to validate our simulation methodology by correlating results with available measurements. We were able to demonstrate a strong agreement between Collin’s theory, ANSYS HFSS simulation, and VNA measurement.

Red simulated S-parameters strongly correlated with blue measurements.

To perform a series of parametric studies, we swept thousands of antenna design iterations across a wide frequency range of 50 GHz for structures ranging from 50-100 guide wavelengths long. High-performance computing gave us the ability to solve return loss and insertion loss S-parameters within just a few minutes for each design iteration by distributing across 48 cores.

Sample Parametric Design Sweep

Finally, we used the lessons we learned from Collin’s equations and the parametric study to develop a new signaling system with probe antenna performance never before demonstrated. You can read the full DesignCon paper here. The outcome also pertains to RF applications in addition to potentially addressing Signal Integrity concerns for future high-speed communication channels.

Rules of thumb are important to fast and practical design, but their application can often be limited. Competitive innovation demands we explore beyond these limitations, and the only way to match the speed and accuracy of design rules is to use simulations capable of offering fast design exploration with the same reliability as measurement. ANSYS HFSS gave us the ability not only to optimize our design, but also to teach us about the physics that explain our design and to accurately predict the behavior of new innovative designs.
