Regardless of how “multi” they are, traditional CPU cores are not nearly efficient enough at running the massive data models researchers are now building, especially in intensive, artificial intelligence (AI)-enabled applications like natural language processing (NLP), image processing and recommender systems.
We need a new architecture better suited to current and future artificial intelligence and machine learning applications — and we need it now.
AI is steadily eating software. ML applications are developed to achieve outcomes that are probabilistic rather than deterministic: they don’t need the nth degree of precision, just enough statistical confidence to determine that an image of a car is indeed a car. That reduced need for precision means far more of a chip’s transistors can be devoted to the real work of computation rather than to keeping multiple cores in sync.
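To make the precision point concrete, here is a minimal sketch (assuming NumPy; the random "car score" is purely illustrative, not from the article) showing that a classification decision computed at half precision typically matches the full-precision one, even though the raw numbers differ slightly:

```python
import numpy as np

# Hypothetical toy classifier: a dot product between learned weights and
# image features yields a "car score"; the class decision is score > 0.
rng = np.random.default_rng(0)
weights = rng.normal(size=256)
features = rng.normal(size=256)

# Full-precision (float64) score.
score_fp64 = float(weights @ features)

# Reduced-precision (float16) score: half the bits per value.
score_fp16 = float(weights.astype(np.float16) @ features.astype(np.float16))

# The raw scores differ a little, but the probabilistic decision
# ("is this a car?") is almost always the same.
print(score_fp64, score_fp16)
print("decisions match:", (score_fp64 > 0) == (score_fp16 > 0))
```

This is the intuition behind low-precision AI hardware: cheap, narrow arithmetic units are usually good enough for a "high-enough probability" answer, freeing silicon that general-purpose cores spend on precision and coherence.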
High-performance, flexible solutions will be necessary for continual disruption, and they will be an important democratizing factor, allowing unexpected underdogs to become the AI heroes of the future without having to go head-to-head with a FAANG or a tech-savvy nation-state. Multicore kept the industry afloat for many years, and it will still be important in many applications. But for a future dominated by compute-hungry, probabilistic, AI-powered applications, we’ll need some truly clever solutions. Reconfigurable dataflow architecture (RDA) is one answer, and chances are we’ll need even more.
According to the Artificial Intelligence World Society Innovation Network (AIWS.net), AI can be an important tool for helping people achieve well-being and happiness, and for relieving them of resource constraints and arbitrary or inflexible rules and processes. In this effort, the Michael Dukakis Institute for Leadership and Innovation (MDI) invites participation and collaboration with think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of full-scale AI for world society.