Probability for Deep Learning Quantum is the first book to address probabilistic methods in deep learning and in quantum technology simultaneously, using a common platform: the Many-Sorted Algebra (MSA) view. While machine learning is built on a foundation of probability, probability is also at the heart of quantum physics, where it is the cornerstone of quantum applications, including quantum measurement, quantum information theory, quantum communication theory, quantum sensing, quantum signal processing, quantum computing, quantum cryptography, and quantum machine learning. Although some of the probabilistic methods used in machine learning differ from those used in the quantum technologies, many techniques are very similar.
Probability is introduced in the text rigorously, in Kolmogorov's vision, though slightly modified by developing the theory in a Many-Sorted Algebra setting. The same algebraic construct is used to exhibit the shared structures underlying much of both machine learning and quantum theory. Deep learning and quantum technologies have several probabilistic and stochastic methods in common, and these methods are described and illustrated with numerous examples throughout the text. Entropy is treated from both the Shannon and the von Neumann viewpoints. Singular value decomposition, a basic tool in machine learning, reappears in quantum theory as the Schmidt decomposition. Beyond these in-common methods, Born's rule and positive operator-valued measures are described and illustrated, along with quasi-probabilities. Author Charles R. Giardina provides clear and concise explanations, accompanied by insightful and thought-provoking visualizations, to deepen your understanding and enable you to apply the concepts to real-world scenarios.
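For readers who want a concrete handle on these in-common tools, here is a minimal NumPy sketch, not code from the book: the function names, the example states, and the projective POVM are illustrative assumptions. It computes a Shannon entropy and a von Neumann entropy, applies Born's rule Pr(m) = Tr(E_m rho) with a two-outcome POVM, and recovers Schmidt coefficients from a singular value decomposition.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log2 p_i of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho) of a density matrix."""
    evals = np.linalg.eigvalsh(rho)       # density matrices are Hermitian
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log2(evals))

# A maximally mixed qubit: S(rho) equals the Shannon entropy of its eigenvalues.
rho_mixed = np.eye(2) / 2
print(shannon_entropy([0.5, 0.5]))        # 1.0 bit
print(von_neumann_entropy(rho_mixed))     # 1.0 bit

# Born's rule with a simple POVM {E0, E1}: here the projectors onto |0> and |1>.
E0 = np.array([[1, 0], [0, 0]], dtype=float)
E1 = np.array([[0, 0], [0, 1]], dtype=float)
psi = np.array([np.sqrt(0.25), np.sqrt(0.75)])     # illustrative pure state
rho_pure = np.outer(psi, psi.conj())
print(np.trace(E0 @ rho_pure), np.trace(E1 @ rho_pure))   # 0.25, 0.75

# Schmidt decomposition via SVD: reshaping a bipartite pure state on
# C^2 (x) C^2 into its 2x2 coefficient matrix, the singular values are
# exactly the Schmidt coefficients.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
schmidt = np.linalg.svd(bell.reshape(2, 2), compute_uv=False)
print(schmidt**2)   # [0.5, 0.5]; their Shannon entropy is the entanglement entropy
```

The last line ties the threads together: the Shannon entropy of the squared Schmidt coefficients equals the von Neumann entropy of either reduced state, one of the shared structures the MSA view is meant to expose.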
Charles R. Giardina was born in the Bronx, NY, on December 29, 1942. He received the B.S. degree in mathematics from Fairleigh Dickinson University, Rutherford, NJ, and the M.S. degree in mathematics from Carnegie Institute of Technology, Pittsburgh, PA. He also received the M.E.E. degree in 1969 and the Ph.D. degree in mathematics and electrical engineering in 1970 from Stevens Institute of Technology, Hoboken, NJ. Dr. Giardina was Professor of Mathematics, Electrical Engineering, and Computer Science at Fairleigh Dickinson University from 1965 to 1982. From 1982 to 1986, he was a Professor at the Stevens Institute of Technology, and from 1986 to 1996 a Professor at the College of Staten Island, City University of New York. From 1996, he was with Bell Telephone Laboratories, Whippany, NJ, USA. His research interests include digital signal and image processing, pattern recognition, artificial intelligence, and the constructive theory of functions.

Dr. Giardina has authored numerous papers in these areas, and several books, including Mathematical Models for Artificial Intelligence and Autonomous Systems, Prentice Hall; Matrix Structure Image Processing, Prentice Hall; Parallel Digital Signal Processing: A Unified Signal Algebra Approach, Regency; Morphological Methods in Image and Signal Processing, Prentice Hall; Image Processing – Continuous to Discrete: Geometric, Transform, and Statistical Methods, Prentice Hall; and A Unified Signal Algebra Approach to Two-Dimensional Parallel Digital Signal Processing, Chapman and Hall/CRC Press.