At one point I was doing a PhD that involved writing a custom FEM program in Java. I ended up not completing it, but I have been using FEM for many years to design things.
One of the ways I play around with programming languages is to try to make a small finite element solver. I think so far I've made them in Matlab, Python, JavaScript, and Rust. Maybe I made one in C or C++ as well.
Whenever I do, I realize that a lot of the barrier is getting good matrix-solving tools in that language. Rust was a bit difficult because I was forced to choose between one or two main linear algebra codes. However, these codes are not as heavily optimized and will not be as fast as the Fortran-based codes that a lot of the Python libraries now reference (LAPACK and BLAS, if I remember correctly).
It comes down to how you store the information in the matrix. Physical problems solvable by FEM are symmetric and linearly independent, or some tricks are done to make them so. They also end up being quite sparse. The sparse solvers, and the LAPACK and BLAS routines underneath them, along with all of the past optimizations that were probably made for dealing with cache size (I'm speculating here, I really don't know), make a huge difference on these problems.
The last time I touched the Rust libraries to write a simple code, I just stuck with dense (non-sparse) solvers. There's an enormous difference in storage size and in the computational effort needed to work through a dense matrix, especially when the problem is large. I think past a few million degrees of freedom on typical mechanical engineering problems, it started to become unmanageable on a local CPU.
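To make the storage difference concrete, here is a minimal sketch in Python with SciPy (my illustration, not the Rust code referred to above): a tridiagonal "stiffness-like" system with 100,000 unknowns fits comfortably in a sparse format, while the equivalent dense matrix would need on the order of 80 GB.

```python
# Minimal sketch (SciPy, not the Rust libraries mentioned above) of why sparse
# storage matters for FEM-sized systems.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100_000  # degrees of freedom

# A symmetric tridiagonal matrix, the kind of pattern a 1D FEM mesh produces.
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

f = np.ones(n)

# Sparse storage keeps ~3n nonzeros; a dense float64 matrix would need
# n * n * 8 bytes, roughly 80 GB for n = 100,000.
print(f"nonzeros stored: {K.nnz}, dense entries would be: {n * n}")

u = spla.spsolve(K, f)  # sparse direct solve
print(u[:3])
```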
There are tons of solvers and a lot of libraries: SciPy, FEniCS, and most likely even MATLAB, Octave, and FreeFEM++ all use LAPACK and BLAS (at least in their most performant flavours), eventually calling the Fortran subroutines written decades ago. And yes, one of the tricky parts is also how Fortran and C store 2D arrays differently: https://en.wikipedia.org/wiki/Row-_and_column-major_order.
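For what it's worth, a quick NumPy illustration of the row- vs column-major point (my addition, not from the thread):

```python
# Row-major (C-style) vs column-major (Fortran-style) layout of the same matrix.
import numpy as np

a_c = np.arange(6).reshape(2, 3, order="C")  # row-major, C convention
a_f = np.asfortranarray(a_c)                 # column-major, Fortran convention

# Same logical matrix, different memory order:
print(a_c.ravel(order="K"))  # [0 1 2 3 4 5] -> rows are contiguous
print(a_f.ravel(order="K"))  # [0 3 1 4 2 5] -> columns are contiguous

# LAPACK/BLAS expect Fortran order, so wrappers may copy or transpose
# behind the scenes when handed a C-ordered array.
print(a_c.flags["C_CONTIGUOUS"], a_f.flags["F_CONTIGUOUS"])  # True True
```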
So no one rewrites this stuff themselves, except for academic purposes. I was on the verge of trying to extend SciPy to support block matrices four years ago for my thesis, but I was able to work around it because duck typing had been generalised enough to encapsulate even that.
Sparse and full matrices have totally different data structures, hence the difference in the solvers. Engineers and mathematicians who want more performant solving rely more on the modelling of the problem (physically or mathematically) than on improving the solvers. The other option is hardware-assisted optimization. For full solvers, I suppose the AI/ML folk have come up with TPUs, partly because classifying an image has very different floating-point tolerances than sending a satellite to Mars.
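To illustrate the "totally different data structure" point, here is what SciPy's CSR format actually stores for a tiny matrix (my example, not from the comment):

```python
# What a sparse data structure keeps around, using SciPy's CSR format.
import numpy as np
import scipy.sparse as sp

dense = np.array([[4.0, 0.0, 0.0],
                  [0.0, 5.0, 2.0],
                  [0.0, 0.0, 6.0]])

csr = sp.csr_matrix(dense)

# Only the nonzeros and their positions are stored, not the full n x n grid:
print(csr.data)     # [4. 5. 2. 6.]   nonzero values
print(csr.indices)  # [0 1 2 2]       column index of each value
print(csr.indptr)   # [0 1 3 4]       where each row starts in `data`
```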
The most performant BLAS implementations are generally written in C and assembly these days, with a Fortran wrapper. So you can have a weird sandwich of C -> Fortran interface -> C in your code.
It's a widely-used and widely-studied method, so the popular implementations are pretty optimized and the toolkits extensive. Unless you have some fundamental insight/optimization specific to your problem, I don't think it's actually worth it to roll your own. Easier to just start with something open sourced and build a small module on top.
Nvidia is rolling FEM into PhysX [1] - exciting stuff! GPU-accelerated FEM is going to be a game changer if they didn't cut too many corners in the pursuit of speed.
It depends. The difference vs traditional engineering software is in how much deviation from reality there is, and how much you can tolerate. Both of these depend on what exactly you're simulating and what sort of results you're looking for.
For example, Nvidia published a paper a while ago using Flex as a simulation environment for training AI to perform robotic tasks:
For those interested in FEM, I found out about Fenics a few years back [1], here's an extract from their homepage:
"FEniCS is a popular open-source (LGPLv3) computing platform for solving partial differential equations (PDEs). FEniCS enables users to quickly translate scientific models into efficient finite element code. With the high-level Python and C++ interfaces to FEniCS, it is easy to get started, but FEniCS offers also powerful capabilities for more experienced programmers. FEniCS runs on a multitude of platforms ranging from laptops to high-performance clusters."
I never got to use it because I moved out of mechanical simulations a few years ago, but it definitely looks pretty sweet ;)
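To give a flavour of what "quickly translate scientific models into efficient finite element code" means, here is roughly what the standard Poisson demo looks like with the legacy dolfin Python API (written from memory of the FEniCS tutorial; the newer DOLFINx API differs):

```python
from dolfin import *

# Poisson problem -div(grad(u)) = f on the unit square, u = 0 on the boundary.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

a = dot(grad(u), grad(v)) * dx   # bilinear form
L = f * v * dx                   # linear form

bc = DirichletBC(V, Constant(0.0), "on_boundary")

u_h = Function(V)
solve(a == L, u_h, bc)
```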
Even better, one of the guys involved with the code has a fantastic Youtube series on the Finite Element Method [1]
I used it in grad school, he's a great explainer, and he's addressing the talk to an audience of computer science students. It's long, of course, but it's spot on.
In case anyone is interested, I have been working since 2013 on building an integrated and unified simulation platform and GUI for FEA and multiphysics simulation codes, such as FEniCS, OpenFOAM, and SU2. The rather neat idea is that users only need to learn one (easy) platform while being able to use essentially any simulation code underneath [1].
I used it extensively in my PhD thesis for solving nonlinear eigenvalue problems that arise from Maxwell's equations. It's a great tool, in fact one of the best tools for scientific computing I ever used.
We also used it extensively for teaching, since you can implement a lot of cool things without having to worry too deeply about programming or CS.
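I can't speak to the exact setup above, but the discrete problem that falls out of this kind of FEM model is typically a large sparse (generalized) eigenvalue problem K x = lambda M x. Here is a generic SciPy sketch of that step, with toy matrices standing in for real FEM assemblies:

```python
# Generic sketch of the generalized eigenvalue problem K x = lambda M x that
# FEM discretizations produce; the matrices here are toys, not a real model.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
K = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")  # "stiffness"
M = sp.identity(n, format="csc")                # "mass"

# Smallest eigenvalues via shift-invert, the usual trick for these problems.
vals, vecs = spla.eigsh(K, k=5, M=M, sigma=0.0, which="LM")
print(vals)
```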
I'm a fan of Code_Aster at the moment.
It has a massive amount of documentation due to its provenance from the nuclear industry. The only downside (for me as an English-only speaker) is that it's all either in French or machine-translated (including function and parameter names), which adds an extra layer of difficulty.
FEM is super important/useful in engineering and scientific disciplines. While the basic concepts are quite straightforward, all the details you need for developing useful engineering FEM tools make it really challenging. You can read a textbook and learn a lot, but in production tools you have to figure out how to handle things like: mesh generation, mesh adaptivity, coupled nonlinear equation support, different time-stepping/integration schemes, 3D, convective term stabilization, sparse matrix solvers, preconditioning, parallelization (e.g. MPI for supercomputers), and much much more. I work on an open source FEM framework called MOOSE (https://mooseframework.org/) and get to enjoy digging into this stuff every day!
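To illustrate the "basic concepts are quite straightforward" part before all of those production concerns pile on, here is a rough sketch (mine, not MOOSE code) of assembling and solving a 1D Poisson problem with linear elements:

```python
# A rough sketch of textbook FEM: assemble a global sparse stiffness matrix
# from 1D linear elements for -u'' = 1 with u(0) = u(1) = 0.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_el = 100
nodes = np.linspace(0.0, 1.0, n_el + 1)
n_nodes = nodes.size

rows, cols, vals = [], [], []
F = np.zeros(n_nodes)
for e in range(n_el):
    h = nodes[e + 1] - nodes[e]
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = (h / 2.0) * np.array([1.0, 1.0])                  # element load for f = 1
    for a in range(2):
        F[e + a] += fe[a]
        for b in range(2):
            rows.append(e + a)
            cols.append(e + b)
            vals.append(ke[a, b])

# Duplicate (row, col) entries are summed when COO is converted,
# which is exactly what assembly needs.
K = sp.coo_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes)).tocsr()

# Apply the Dirichlet BCs by solving only on the interior nodes.
interior = np.arange(1, n_nodes - 1)
u = np.zeros(n_nodes)
K_in = K[interior, :][:, interior]
u[interior] = spla.spsolve(K_in.tocsc(), F[interior])

# Exact solution is u(x) = x(1 - x)/2, so the midpoint should come out ~0.125.
print(u[n_nodes // 2])
```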
In my Fluid Dynamics class we used Excel together with Finite Element Analysis to solve all kinds of problems. It was actually pretty cool. I remember specifically solving heat gradient problems in various shapes this way.
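The spreadsheet version of this is essentially relaxation on a grid: each interior cell is repeatedly set to the average of its neighbours. A rough Python equivalent of that iteration (my addition; strictly a finite-difference take rather than true FEM):

```python
# Jacobi relaxation for steady-state heat conduction on a rectangular plate,
# the same idea as iterating averaging formulas in spreadsheet cells.
import numpy as np

T = np.zeros((50, 50))
T[0, :] = 100.0          # hot top edge; the other edges are held at 0

for _ in range(5000):
    # Each interior cell tends toward the average of its four neighbours.
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                            T[1:-1, :-2] + T[1:-1, 2:])

print(T[25, 25])         # temperature near the middle of the plate
```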
FEM is the workhorse of pretty much any engineering field that needs models expressed as partial differential equations, such as heat exchange, mechanics (solid and fluid statics and dynamics), electromagnetics, etc. It's hard to go through a day without using anything whose design process involved FEM analysis at some point.
While FEM is (rightfully) very popular, a lot of work on electromagnetism is done using finite-difference time-domain methods (FDTD) instead of finite-element.
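For anyone who hasn't seen it, the core of FDTD is surprisingly small. A bare-bones 1D sketch of the Yee leapfrog update (my illustration, in arbitrary units, with a hard Gaussian source and no absorbing boundary):

```python
# Bare-bones 1D FDTD: E and H fields on a staggered grid, leapfrogged in time.
import numpy as np

n = 400
ez = np.zeros(n)          # electric field
hy = np.zeros(n)          # magnetic field
imp0 = 377.0              # impedance of free space

for t in range(1000):
    # update magnetic field from the spatial difference of E
    hy[:-1] += (ez[1:] - ez[:-1]) / imp0
    # update electric field from the spatial difference of H
    ez[1:] += (hy[1:] - hy[:-1]) * imp0
    # hard source: a Gaussian pulse injected at one node
    ez[50] += np.exp(-((t - 30.0) / 10.0) ** 2)

print(ez.max())
```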
I find that time domain is best for low-Q devices (antennas) while frequency domain is best for high-Q devices (filters). Though there are times when one will mesh a structure better than the other. Thin, double-curved dielectrics with metal on both sides are the most difficult, especially antennas, since you need to mesh out to 1/4 lambda.
I don’t think that will ever happen. The knowledge to develop the tools is too specialized and the user base too small. The same goes for any RF tools.
There's a lot of open source from the academic world (e.g. SALOME, ONELAB, FreeFEM). But maybe you mean "tooling" in terms of integration into CAD packages (which are arguably more lacking in the open-source world than FEM packages).
Yes. The integration and polish is what makes all the difference. The commercial EDA tools are light years ahead of the open source ones, and will probably stay that way.
That is part of the reason, but another part is the system's incentives: people are incentivized to keep their code private because it is a source of funding for them.
They probably wouldn't get paid for it. Software piracy of EDA tools is rampant in China. I have not had an issue getting snippets of EDA code (to see how a model is implemented) as long as I sign an NDA.
"...a framework for the numerical simulation of partial differential equations using arbitrary unstructured discretizations on serial and parallel platforms... support for adaptive mesh refinement... supports 1D, 2D, and 3D steady and transient simulations on a variety of popular geometric and finite element types."
When I studied mechanical engineering we had one semester about finite elements. It was really fascinating to take some simple physical rules, set up some matrices, solve them, and then have the result be reproducible in the real world. It drove home to me the power of computing.