Avoid real-time computation
“The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation” — Paul Dirac, 1929. For example:
- The Navier–Stokes equations are the fundamental basis of almost all Computational Fluid Dynamics (CFD) problems. They are extremely useful for modeling the weather, airflow around an airplane wing, ocean currents, water flow in a pipe, and the spread of pollution. But they cannot be solved in real time because of the computation they require.
- Fully describing an arbitrary many-body state in quantum mechanics requires an exponential amount of information. While simulating a quantum system of 30 qubits requires just tens of gigabytes, simulating 300 qubits would require more bytes than there are atoms in the observable universe! Even state-of-the-art approximation methods for quantum mechanics such as Hartree-Fock (HF) theory and Density Functional Theory (DFT) can take a long time: hours, days, or even weeks to compute.
- Even with the computing power available today, simulating acoustic transmission or absorption in buildings, let alone analyzing it in real time, can take a long time.
- Computer simulation of electrons in the potential of atomic nuclei is the workhorse of modeling material properties such as phase stability, mechanical behavior, and thermal conductivity. However, these simulations are limited by their computational cost.
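The exponential blow-up mentioned for quantum systems is easy to check directly: a full state vector of n qubits holds 2^n complex amplitudes, each taking 16 bytes in double precision. A quick sketch of the arithmetic:

```python
# Memory needed to store a full n-qubit state vector:
# 2**n complex amplitudes, each 16 bytes (complex128).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

print(state_vector_bytes(30) / 1e9)   # about 17 GB for 30 qubits
print(state_vector_bytes(300))        # ~3e91 bytes for 300 qubits,
                                      # far more than the ~1e80 atoms
                                      # in the observable universe
```

This is exactly why exact simulation stops scaling and approximate methods (and, as argued below, learned surrogates) become necessary.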
Deep learning has the potential to break through this limitation.
Imagine building a deep learning model by constructing a dataset that covers the entire physically relevant set of configurations for the problem, and then using that model to completely bypass the costly calculations in the future.
You can use the predictive power of deep neural networks to cut the computation time down to a couple of seconds.
Yes, you will run the complex simulations once, but only once! Once you have built your dataset and chosen a fitting methodology, such as a simple feed-forward neural network (FFN), a restricted Boltzmann machine (RBM), or any other architecture, your model can serve as a template for all future work!
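The workflow above can be sketched end to end. This is a minimal, hypothetical illustration using only numpy: a tiny feed-forward network is trained once on data from a "costly" computation (here just sin(x), standing in for a real simulator), after which future queries are a cheap forward pass. The architecture, sizes, and learning rate are arbitrary choices for the sketch, not a recipe from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Build the dataset once, by running the "costly" computation.
#    sin(x) is a stand-in for an expensive solver's output.
X = rng.uniform(-np.pi, np.pi, size=(512, 1))
y = np.sin(X)

# 2. Fit a small feed-forward network (one tanh hidden layer)
#    by full-batch gradient descent on mean squared error.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y
    # Backpropagate the MSE gradients.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# 3. Future queries bypass the simulator entirely.
def surrogate(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Surrogate prediction vs. the true value it was trained to imitate.
print(float(surrogate(np.array([[0.5]]))), float(np.sin(0.5)))
```

In a real application the dataset would come from a CFD or DFT run over the physically relevant configurations, and the one-time training cost is amortized over every subsequent prediction.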
To me, this is one of the best uses of deep learning to build an explanation without too much computation!
People often think of AI as boosting growth by substituting for humans, but in fact huge value is going to come from how humans use AI. This is yet another perfect example of how deep learning will help us advance.
By Ajay Malik
Ex-Googler Ajay Malik is CTO of Lunera, where he is building a fog cloud by embedding a compute platform within the end caps of bulbs and tube lights. With over 25 years of executive engineering leadership and entrepreneurial experience in delivering award-winning innovative products, Ajay joined Lunera from Google, where he was head of architecture and engineering for the worldwide corporate networking business. Ajay has also held executive leadership positions at Hewlett-Packard, Cisco, and Motorola. He has 80+ patents pending or approved and is the author of “RTLS for Dummies” and “Artificial Intelligence for Wireless Networking.”