Apple is reportedly beefing up the generative artificial intelligence (AI) capabilities of its mobile devices.
To that end, the company has begun hiring for dozens of roles to work on large language models (LLMs), the Financial Times reported Sunday (Aug. 6).
The job postings say the tech giant has embarked on “ambitious long-term research projects that will impact the future of Apple, and our products.” The report notes that the listings suggest Apple is focused on bringing technologies like LLMs specifically to mobile devices, whereas competitors like Google have released AI products such as chatbots.
“We view AI and machine learning as core fundamental technologies that are integral to virtually every product that we build,” Apple CEO Tim Cook said on an earnings call this week.
The FT report notes that the company’s third-quarter spending on research and development was $3.1 billion higher than in the same quarter in 2022, which Cook said is due in part to its generative AI efforts.
As PYMNTS wrote last week, LLM technology is taking AI “to new heights by expanding its capabilities beyond text to include images, speech, video and even music.”
And as companies build LLMs, they also face the challenges of collecting and classifying vast amounts of data, while grappling with the complexities of how these models operate and how that differs from the past status quo.
“Technology giants such as Alphabet and Microsoft and investors such as Fusion Fund and Scale VC are investing in LLMs and forming partnerships,” PYMNTS wrote. “The technology companies’ and investors’ task is a big one. It includes ensuring their LLM protégés gather and train large data sets, called parameters, and fine-tune them so that they execute and generate desired outputs or results.”
And with all this development and investment comes regulation, as governments around the world try to get a better handle on AI.
Doing so might be easier said than done, University of Pennsylvania Law School Professor Cary Coglianese told PYMNTS recently.
“Trying to regulate AI is a little bit like trying to regulate air or water,” said the professor, who is also founding director of the Penn Program on Regulation.
True, those things are already regulated throughout the world, but — like AI — they have distinctive characteristics that require unique approaches to oversight. Coglianese argued that regulating AI will be a multifaceted activity that changes depending on the type of algorithm and how it is used.
“It’s not one static thing. Regulators — and I do mean that plural, we are going to need multiple regulators — they have to be agile, they have to be flexible, and they have to be vigilant,” he said, adding that “a single piece of legislation” won’t solve the problems connected to AI.