Devorexa

Archives May 2023

Mojo: a new programming language for all AI developers

Modular, an AI infrastructure company, has unveiled a new programming language called Mojo for AI developers, aiming to combine Python compatibility with advanced low-level programming features and the ability to harness GPUs and other AI accelerators. The language is built on MLIR, the Multi-Level Intermediate Representation compiler framework, to provide low-level systems programming and advanced compilation features.

Mojo was designed to bridge the gap between research and production, leveraging Python syntax as well as systems programming and compile-time metaprogramming. Modular claims that Mojo is faster than C++, more hackable than Nvidia’s CUDA, and as safe as Rust. The language aims to offer an innovative programming model for machine learning accelerators while supporting general-purpose programming.

Mojo is intended to become a superset of Python, fully compatible with existing Python programs. The language already supports core Python features such as async/await, error handling, and variadics, with the exception of classes, which are not yet supported. The goals of the language include:

  • Full compatibility with the Python ecosystem
  • Predictable low-level performance and control
  • The ability to deploy code subsets to accelerators
  • Avoidance of ecosystem fragmentation
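Since Mojo is meant to accept existing Python programs unchanged, the compatibility claim can be illustrated with ordinary Python that exercises the features the article lists (variadics, error handling, async/await). This is a plain Python sketch, not Mojo code:

```python
import asyncio

# Variadic function with error handling -- standard Python that a
# Python-superset language would also need to accept unchanged.
def total(*values):
    try:
        return sum(values)
    except TypeError:
        return None

# async/await, another of the Python core features the article lists.
async def doubled(x):
    await asyncio.sleep(0)  # yield to the event loop
    return x * 2

print(total(1, 2, 3))            # 6
print(asyncio.run(doubled(21)))  # 42
```

Classes are the notable omission: a Python program defining a `class` would not yet run under Mojo.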

The Mojo standard library, compiler, and runtime are not yet available for local development. However, Modular has provided a hosted development environment called the Mojo Playground for developers to test and experiment with the language, requiring sign-up for access.

Mojo’s roadmap includes features such as tuple support, keyword arguments in functions, improved package management support, and standard library features like canonical arrays and dictionary types. Additionally, the language aims to provide full support for dynamic features in Python classes and C/C++ interoperability.

Mojo represents an exciting development for AI developers looking for a high-performance and flexible tool to accelerate their research and production, while leveraging the strengths of Python and advanced low-level programming features.

LANGUAGE          TIME*      SPEEDUP VS PYTHON
Python 3.10.9     1027 s     1x
PyPy              46.1 s     22x
Scalar C++        0.20 s     5000x
Mojo 🔥           0.03 s     35000x

* Algorithm: Mandelbrot | Instance: AWS r7iz.metal-16xl (Intel Xeon)
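For context on what the benchmark measures: a Mandelbrot benchmark times the escape-iteration kernel below over a grid of points. This is a generic Python sketch of that kernel, not Modular's benchmark code, which is not reproduced in the article:

```python
# Escape-time kernel: iterate z = z^2 + c and count iterations until
# |z| exceeds 2 (escape), up to a fixed budget. Points that never
# escape within the budget are treated as inside the Mandelbrot set.
def mandelbrot_point(cx, cy, max_iter=200):
    x = y = 0.0
    for i in range(max_iter):
        x, y = x * x - y * y + cx, 2.0 * x * y + cy
        if x * x + y * y > 4.0:  # |z| > 2, point has escaped
            return i
    return max_iter

print(mandelbrot_point(0.0, 0.0))  # origin never escapes -> 200
print(mandelbrot_point(2.0, 2.0))  # escapes on the first check -> 0
```

The kernel is a tight floating-point loop with no library calls, which is why it is a popular target for compiler benchmarks: the measured gap is almost entirely interpreter overhead versus compiled code.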

‘Godfather of AI’ Quits Google, Warns About the Dangers of AI

‘I don’t think they should scale this up more until they have understood whether they can control it,’ Geoffrey Hinton says.

Geoffrey Hinton, the “godfather of AI” who worked at Google for more than a decade, has quit the company, citing concerns about the dangers of artificial intelligence. Hinton developed the technology that paved the way for current AI systems, including ChatGPT, but he now regrets his contributions to the field.

In an interview with The New York Times, Hinton warned that in the short term, AI could lead to the proliferation of fake images, videos, and text, making it difficult for people to discern what is true. However, in the long term, he believes that AI systems could eventually learn unexpected, dangerous behavior, which could lead to the development of killer robots.

Hinton also expressed concerns about the impact of AI on the labor market and called for regulation to ensure that companies like Google and Microsoft do not get locked into a dangerous race. He suggested that these companies might already be working on dangerous systems in secret and should not scale up AI until they can control it.

Hinton is not the only AI expert to warn about the dangers of the technology. In recent months, two major open letters have warned about the “profound risks to society and humanity” posed by AI, signed by many of the people who helped create it.

Hinton’s concerns about AI have grown over the past year, as the technology has advanced rapidly with the development of systems such as OpenAI’s ChatGPT and Google’s Bard. He believes that these systems are beginning to behave in ways that are not possible for the human brain, and that they will become even more dangerous as companies further refine and train them.

As AI continues to evolve, Hinton warns that it is essential to understand its capabilities and ensure that it is used responsibly to benefit society.