CUDA Fortran for Scientists and Engineers: Best Practices for Efficient CUDA Fortran Programming shows how high-performance application developers can leverage the power of GPUs using Fortran, the familiar language of scientific computing and supercomputer performance benchmarking. The authors presume no prior parallel computing experience and cover the basics along with best practices for efficient GPU computing using CUDA Fortran. To add CUDA Fortran to existing Fortran codes, they explain how to understand the target GPU architecture, identify computationally intensive parts of the code, and modify the code to manage data and parallelism and optimize performance – all in Fortran, without having to rewrite in another language. Each concept is illustrated with actual examples so you can immediately evaluate the performance of your code in comparison. This second edition provides much-needed updates on how to efficiently program GPUs in CUDA Fortran. It can be used either as a tutorial on GPU programming in CUDA Fortran or as a reference text.
API Design for C++, Second Edition provides a comprehensive discussion of Application Programming Interface (API) development, from initial design through implementation, testing, documentation, release, versioning, maintenance, and deprecation. It is the only book that teaches the strategies of C++ API development, including interface design, versioning, scripting, and plug-in extensibility. Drawing from the author's experience on large-scale, collaborative software projects, the text offers practical techniques of API design that produce robust code for the long term. It presents patterns and practices that provide real value to individual developers as well as organizations. The Second Edition includes all new material fully updated for the latest versions of C++, including a new chapter on concurrency and multithreading, as well as a new chapter discussing how Objective-C++ and C++ code can coexist and how a C++ API can be accessed from Swift programs. In addition, it explores often overlooked issues, both technical and non-technical, contributing to successful design decisions that produce high-quality, robust, and long-lived APIs. It focuses on various API styles and patterns that will allow you to produce elegant and durable libraries. A discussion of testing strategies concentrates on automated API testing techniques rather than attempting to cover end-user application testing techniques such as GUI testing, system testing, or manual testing.
Theory of Structured Parallel Programming is a comprehensive guide to structured parallel programming, the parallel counterpart of traditional structured sequential programming. The book provides readers with comprehensive coverage of the theoretical foundations of structured parallel programming, including analyses of parallelism and concurrency, truly concurrent process algebras, building-block-based structured parallel programming, modelling and verification of parallel programming languages, modelling and verification of parallel programming patterns, as well as modelling and verification of distributed systems. There have always been two ways to approach parallel computing: one is the structured way, and the other is the graph-based (truly concurrent) way. The structured way is often based on interleaving semantics, as in the process algebra CCS. Since parallelism in interleaving semantics is not a fundamental computational pattern (the parallel operator can be replaced by alternative composition and sequential composition), the parallel operator often does not occur as an explicit operator in mainstream programming languages such as C, C++, and Java.
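To make the parenthetical remark concrete, here is a minimal illustration (a standard CCS identity, not an excerpt from the book): under interleaving semantics, the parallel composition of two distinct, non-communicating actions is bisimilar to the sum of their two sequential interleavings, so the parallel operator can indeed be expressed using only alternative and sequential composition.

```latex
% Expansion law of CCS for two distinct, non-communicating actions a and b:
% parallel composition reduces to a choice between the two interleavings.
a.\mathbf{0} \mid b.\mathbf{0} \;\sim\; a.b.\mathbf{0} + b.a.\mathbf{0}
```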
Artificial Intelligence and Machine Learning for Open-world Novelty, Volume 134 in the Advances in Computers series, presents innovations in computer hardware, software, theory, design, and applications, with this updated volume including new chapters on AI and Machine Learning for Real-world problems, Graph Neural Network for learning complex problems, Adaptive Software platform architecture for Aerial Vehicle Safety Levels in real-world applications, OODA Loop for Learning Open-world Novelty Problems, Privacy-Aware Crowd Counting Methods for Real-World Environment, AI and Machine Learning for 3D Computer Vision Applications in Open-world, and PIM Hardware accelerators for real-world problems. Other sections cover Irregular Situations in Real-World Intelligent Systems, Offline Reinforcement Learning Methods for Real-world Problems, Addressing Uncertainty Challenges for Autonomous Driving in Real-World Environments, and more.
Internet of Things: Architectures for Enhanced Living Environments, Volume 133 presents interesting chapters on a variety of timely topics, including Explainable Artificial Intelligence for Enhanced Living Environments: A Study on User Perspective, Human behavioral anomaly pattern mining within an IoT environment: an exploratory study, Indoor Activity Localization Technologies for Assisted Living: Opportunities, Challenges, and Future Directions, Smart Indoor Air Quality Monitoring for Enhanced Living Environments and Ambient Assisted Living, Usability evaluation for the IoT use in Enhanced Living Environments, Roadmap to the elderly enhanced living and care environments: applications and challenges on the Internet of Things domain, and much more.
Handbook of Truly Concurrent Process Algebra provides readers with a detailed and in-depth explanation of the algebra used for concurrent computing. This complete handbook is divided into five parts: Algebraic Theory for Reversible Computing, Probabilistic Process Algebra for True Concurrency, Actors – A Process Algebra-Based Approach, Secure Process Algebra, and Verification of Patterns. The author demonstrates actor models, which are captured using the following characteristics: Concurrency, Asynchrony, Uniqueness, Concentration, Communication Dependency, Abstraction, and Persistence. Every pattern is detailed according to a regular format so that it can be understood and utilized easily, including an introduction to the pattern and its verification. Patterns of vertical domains are also provided, including the domains of networked objects and resource management. To help readers develop and implement software patterns scientifically, the pattern languages are also presented.
Embedded Systems: ARM Programming and Optimization, Second Edition combines an exploration of the ARM architecture with an examination of the facilities offered by the Linux operating system to explain how various features of program design can influence processor performance. The book demonstrates methods by which a programmer can optimize program code in a way that does not impact its behavior but improves its performance. Several applications, including image transformations, fractal generation, image convolution, computer vision tasks, and now machine learning, are used to describe and demonstrate these methods. From this, the reader will gain insight into computer architecture and application design, as well as practical knowledge in embedded software design for modern embedded systems. The second edition has been expanded to include more topics of interest to upper-level undergraduate courses in embedded systems.
DNA (Deoxyribonucleic Acid) computing is an emerging branch of computing that uses DNA sequences, biochemistry, and molecular hardware to encode and process information. Here, information is represented using the four genetic letters, or DNA bases, namely A (Adenine), G (Guanine), C (Cytosine), and T (Thymine), instead of the binary representation (1 and 0) used by traditional computers. This is possible because short DNA molecules with any arbitrary sequence of A, G, C, and T can be synthesized to order. DNA computing is popular mainly for three reasons: (i) speed, (ii) minimal storage requirements, and (iii) minimal power requirements. There are many applications of DNA computing in computer science. Nowadays, DNA computing is widely used in cryptography to build strong security techniques, so that unauthorized users are unable to retrieve the original data content. In DNA-based encryption, data are encrypted using the DNA bases (A, T, G, and C) instead of 0 and 1. Because four bases are used in the encryption process, DNA computing supports more randomness and makes it harder for attackers or malicious users to recover the data. DNA computing is also used for data storage, because large amounts of data can be stored in a very condensed volume: one gram of DNA can hold approximately 700 TB of data, whereas storing the same data on 3 TB hard disks would require approximately 233 disks weighing roughly 151 kilograms in total. In a cloud environment, the Data Owner (DO) stores confidential encrypted data outside of their own domain, which attracts many attackers and hackers; DNA computing can be one of the best solutions for protecting the data on a cloud server, since the DO can encrypt the data using DNA bases by generating a long DNA sequence. Another application of DNA computing is in Wireless Sensor Networks (WSNs), where many researchers are trying to improve security by using DNA cryptography together with the Secure Sockets Layer (SSL), which provides a secure medium for exchanging information. However, recent research also shows some limitations of DNA computing; one critical issue is that DNA cryptography does not have the strong mathematical foundation of other cryptographic systems. This edited book brings together comprehensive information on DNA computing. Along with identifying research gaps in the currently available books and literature, it presents many applications of DNA computing in the fields of computer science, and it also discusses research challenges and directions for future work.
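As a rough sketch of the base-versus-bit encoding described above, the following Python snippet maps binary data onto the DNA alphabet and back. The 2-bits-per-base convention used here (00→A, 01→C, 10→G, 11→T) is a common illustrative choice, not a scheme taken from any particular chapter of this book.

```python
# Minimal sketch: encode bytes as DNA bases using an illustrative
# 2-bits-per-base convention (00->A, 01->C, 10->G, 11->T).
BITS_TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Map each byte to four DNA bases (two bits per base, most significant pair first)."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(sequence: str) -> bytes:
    """Inverse mapping: every group of four bases becomes one byte."""
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    message = b"GPU"
    encoded = bytes_to_dna(message)
    print(encoded)  # "CACTCCAACCCC"
    assert dna_to_bytes(encoded) == message  # round trip recovers the original bytes
```

Practical DNA storage and DNA cryptography systems additionally use error-correcting codes and avoid problematic sequences such as long homopolymer runs; this sketch deliberately omits those concerns.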
Principles of Big Graph: In-depth Insight, Volume 128 in the Advances in Computers series, highlights new advances in the field, with this new volume presenting interesting chapters on a variety of topics, including CESDAM: Centered subgraph data matrix for large graph representation, Bivariate, cluster and suitability analysis of NoSQL Solutions for big graph applications, An empirical investigation on Big Graph using deep learning, Analyzing correlation between quality and accuracy of graph clustering, geneBF: Filtering protein-coded gene graph data using bloom filter, Processing large graphs with an alternative representation, MapReduce based convolutional graph neural networks: A comprehensive review, Fast exact triangle counting in large graphs using SIMD acceleration, A comprehensive investigation on attack graphs, Qubit representation of a binary tree and its operations in quantum computation, Modified ML-KNN: Role of similarity measures and nearest neighbor configuration in multi label text classification on big social network graph data, Big graph based online learning through social networks, Community detection in large-scale real-world networks, Power rank: An interactive web page ranking algorithm, GA based energy efficient modelling of a wireless sensor network, The major challenges of big graph and their solutions: A review, and An investigation on socio-cyber crime graph.
*Textbook and Academic Authors Association (TAA) Textbook Excellence Award Winner, 2024* Engineering a Compiler, Third Edition covers the latest developments in compiler technology, with new chapters focusing on semantic elaboration (the problems that arise in generating code from the ad-hoc syntax-directed translation schemes in a generated parser), on runtime support for naming and addressability, and on code shape for expressions, assignments, and control structures. Leading educators and researchers Keith Cooper and Linda Torczon have revised this popular text with a fresh approach to learning important techniques for constructing a modern compiler, combining basic principles with pragmatic insights from their own experience building state-of-the-art compilers.