In his July 2024 book Mastering AI, Fortune AI Editor Jeremy Kahn makes the case that whether artificial intelligence (AI) offers more promise or problems depends greatly on how we create, implement, use, regulate, and control it. Kahn details AI’s history and the people behind it, its current state (already out of date given how rapidly AI is advancing), ways it will increasingly influence and affect humans across a broad swath of industries, and potential nearer- and farther-future scenarios, all in a well-researched, very accessible, and thought-provoking manner. To keep up with Kahn’s current writing on AI, subscribe to Fortune’s free Eye on AI newsletter. To learn more about how current generative AI works, which publishers allow their content to be used to train it, and much more, consider asking your favorite AI system (mine are Claude and Microsoft Copilot because of how they are trained).
I highly recommend Mastering AI as a comprehensive look at AI with useful information for all types of libraries. Continuing my observations from the February, May, and June 2024 columns, here are some of my takeaways from Mastering AI. Kahn does a great job balancing potential benefits against sobering risks, and weighing the ethics of AI from perspectives far too numerous to list here.
1. Though ChatGPT has attracted enormous attention and use since its release just two years ago in November 2022, AI’s roots go back to Alan Turing’s late-1940s work on how computers could mimic human intelligence. Kahn brings us from Turing to the more recent development of machine learning and neural networks, including the pioneering work of 2024 Nobel laureates Geoffrey Hinton (the “godfather of AI” at the University of Toronto) and Demis Hassabis (Google DeepMind), and shows how Google, Microsoft, and others have invested billions in AI development through acquisitions and business deals with OpenAI and others.
2. AI can be an “ultimate intellectual technology” that extends and supports our mental powers and changes how and what we think, as the map, clock, book, computer, camera, and other earlier technologies did. It can analyze massive amounts of data, find patterns, and predict, simulate, count, and categorize things much faster than we can (e.g., analyzing weather patterns, predicting protein structures, and analyzing entire genomes). We can team with it, reviewing and editing its output and applying the human morals, values, lived experience, and judgment that AI lacks, so that what it tells us serves other people in science, medicine, and other fields. An important question is exactly whose judgment and values we are applying when we use AI-generated information.
3. AI’s results are only as good as the data we use to train it, and that data carries historical biases and stereotypes. Again, which morals and values shape how data are created, and how and for what should we use AI? AI doesn’t know what it is and isn’t supposed to be used to do. Some systems, like Claude, are guided by written “constitutional AI” principles, though written rules remain less reliable than human judgment. Some companies use reinforcement learning from human feedback (RLHF) to steer AI toward appropriate and helpful answers, and some outsource or offshore that work, as with call centers. As we know as librarians, labeling data is extremely important to how automated systems function; that labeling requires human judgment and is also being offshored. What are the implications, especially given the lack of regulation and law around AI and labor in many countries, including the U.S.? In some fields, such as medicine, data about Black people and women are underrepresented. How can we increase representation in data to improve AI’s accuracy and keep it from reinforcing racism, sexism, and other past prejudices?
4. Humans and AI will need to work together for the foreseeable future in a “centaur” approach, in which each brings strengths to a task. How can we avoid automation bias, the tendency to over-trust or over-rely on AI in ways that excuse us from remembering as much, building or retaining expertise, or making decisions? What might that mean for our own critical thinking skills? What happens when the AI companies competing fiercely to reach artificial general intelligence (AGI) and artificial superintelligence (ASI) actually achieve it? Which tasks will humans stop or start doing, and what will be the impacts on labor in healthcare, manufacturing, education, law, lab science, and even the arts? How can AI grow productivity, and for which workers? Will as many as one-third of jobs globally be automated by the mid-2030s, and what new jobs might arise, and where, as happened in previous historical periods like the Industrial Revolution? How will humans continue correcting the inevitable mistakes that AI makes?
5. Kahn sees AI moving from current assistants to a personal agent or “copilot” that can help with tasks as the next big thing. This could be a personal tutor (such as OpenAI and Khan Academy’s Khanmigo) that gets to know a student over their K-12 years and guides the student in applying to college or toward a career path, or helps draft content the student can proof and fact-check. AI can help a teacher change how they engage with students to assess learning, and could be a learning analytics assistant to aggregate data about how well students understand and complete assignments and support timely interventions. Though AI can’t deliver experiences like higher education can, will some students consider AI a substitute for higher education? Not all schools and students can afford the technology these scenarios require – what policies do we need to create equitable access? How might AI actually be a tool for greater equity for students with different levels of resources and language skills? As companies use copilots increasingly, how can they guard against human biases influencing AI further (which may drive further inequity)?
6. Unlike the government-created Internet that makes AI use possible, AI is being developed by the for-profit private sector absent robust government regulation. Because AI is built to mimic human intelligence, it can be inherently deceptive, and it has learned deception from the training data we give it. While Kahn asserts that people should always be informed when they’re interacting with AI software, OpenAI’s opaque model for ChatGPT shields it from scrutiny by regulators and the public. That opacity also helps hide the use of copyrighted content without consent, and many are concerned about how much copyrighted content appears in the training data of large language models (LLMs). Current law is unclear about what constitutes fair use or “fair learning” with respect to AI and copyrighted content, and about whether AI-generated content is copyrightable. How should rights and fees work in the age of AI? We need companies to reveal how AI has been trained, to allow independent auditing and testing, and to have AI systems fully disclose themselves to users. This will require government action. Building on current AI-powered cyberfraud techniques, including deepfakes, Zoom face-swapping, voice cloning, and phishing emails that carry malware, unregulated AI could enable more sophisticated financial and other fraud. Kahn suggests that cybersecurity experts could use AI to find and fix vulnerabilities before bad actors can find and exploit them.
7. Geopolitics around AI are complicated, to say the least. Some believe that whoever controls AI will be able to control the world. Current leading AI chipmaker Taiwan Semiconductor Manufacturing Company (TSMC) has its main base in Taiwan, which is under ongoing threat of invasion from China, though its Arizona plant is now producing comparable output. The U.S. CHIPS and Science Act is expected to foster more domestic chipmaking. Hyperscalers such as Meta are working to develop their own AI chips.
8. As I’ve alluded to, AI comes with many paradoxes. It may free us from many current tasks, and it may hinder some of the very things that make us human. It may make some jobs obsolete, and create new ones we haven’t conceived of yet. It may unlock the ability to synthesize proteins and develop new drugs faster than ever before, and it may also unlock the ability to create (as well as neutralize) bioweapons. From training to operation and use, AI is extremely resource-intensive, consuming large amounts of energy and water for the high-performance chips it needs and for cooling the data centers that house AI servers. It may also lead us to new solutions to climate change.
To Kahn, the dangers AI poses to our intelligence and to our ability to write, think, empathize, and make sound judgments are perhaps its most dire consequences; yet he considers AI’s potential benefits to society worth the risks. He suggests we prepare by developing the political will and policy solutions to guide how AI is created, used, and managed. We know that AI advances every day and will be increasingly prevalent in society over time. I hope Kahn is correct in saying that businesses that maintain a feedback loop between people, data, and AI will be the most successful in the future.
If you’re interested in AI, I hope you’ll read this excellent book. I’ll close with just a few more questions for you. How are you using AI today at work and/or life in general? As MCLS considers how we use AI as an organization, what matters most to you as a member library? What are your favorite resources on AI and libraries, and why? If you’ve read Mastering AI, what are your key takeaways? What future AI topics might I cover in this space? Let me know, at garrisons@mcls.org.