A long article in the Economist examines the pros and cons of artificial intelligence. It said:
Some worry that the technology’s heedless spread will further concentrate economic and political power, up-end swathes of the economy in ways which require some redress even if they offer net benefits and embed unexamined biases ever deeper into the automated workings of society. There are also perennial worries about models “going rogue” in some way as they get larger and larger. “We’re building a supercar before we have invented the steering wheel,” warns Ian Hogarth, a British entrepreneur and co-author of the “State of AI”, a widely read annual report.
To understand why foundation models represent a “phase change in AI,” in the words of Fei-Fei Li, the co-director of Stanford University’s Institute for Human-Centered AI, it helps to get a sense of how they differ from what went before.
All modern machine-learning models are based on “neural networks”: programming that mimics the ways in which brain cells interact with one another. Their parameters describe the weights of the connections between these virtual neurons, weights that the models develop through trial and error as they are “trained” to respond to specific inputs with the sort of outputs their designers want.
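To make the “weights developed through trial and error” idea concrete, here is a minimal sketch in Python: a toy network whose parameters are the connection weights between virtual neurons, nudged a little after each trial until the inputs produce the outputs the designer wants. The XOR task, layer sizes and learning rate are illustrative choices, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs and the outputs the designer wants (here, XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The parameters: weights (and biases) of the connections between
# the virtual neurons, randomly initialised.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: activity flows through the weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # "Trial": measure how far the outputs are from what is wanted.
    err = out - y

    # "Error": backpropagate and nudge every weight a little.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # settles towards [[0], [1], [1], [0]]
```

Foundation models work on the same principle; the difference described in the article is one of scale, with billions of such weights rather than the handful above.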
For decades neural nets were interesting in principle but not much use in practice. The AI breakthrough of the late 2000s and early 2010s came about because computers had become powerful enough to run large ones and the internet provided the huge amounts of training data such networks required. The canonical example was using pictures labelled as containing cats to train a model to recognise the animals. The systems created in this way could do things that no programs had ever managed before, such as provide rough translations of text, reliably interpret spoken commands and recognise the same face when seen in different pictures.
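The labelled-cat-pictures example is supervised learning in miniature. A hedged sketch of such a training loop in PyTorch, assuming a hypothetical photos directory with images sorted into cat/ and not_cat/ sub-folders; the paths, image size and tiny network are illustrative, not from the article.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical labelled data: photos/cat/*.jpg and photos/not_cat/*.jpg.
tfm = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
data = datasets.ImageFolder("photos", transform=tfm)
loader = DataLoader(data, batch_size=32, shuffle=True)

# A small network; its parameters are the connection weights trained below.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 2),    # two outputs: cat / not cat
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:        # labels come from the folder names
        loss = loss_fn(model(images), labels)
        opt.zero_grad()
        loss.backward()                  # trial and error, formalised
        opt.step()                       # nudge the weights
```

The libraries hide the arithmetic, but the loop is the same trial-and-error weight adjustment as in the sketch above, applied to labelled pictures instead of a toy truth table.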