Artificial Intelligence (AI) is the biggest new technology affecting everyone, including language teachers. It is not just on the horizon; it is already here. So I was happy to discover a new podcast, Hard Fork, from the New York Times, whose second episode is about Generative AI.
It is an interview with Emad Mostaque, the founder of Stability AI, the company behind Stable Diffusion, a Generative AI engine. But for almost half of the podcast, columnists Kevin Roose (New York Times tech writer) and Casey Newton (Verge contributing editor and author of the Platformer newsletter) get us up to speed with examples and explanations that prepare us for the interview.
There are many different kinds of AI. A quick, jargon-filled summary would show that they differ along multiple dimensions: history, power, use, and function. But we will save that for another post.
Relevant here is that Generative AI is different from Analytic or Discriminative AI: it studies millions of examples not just to categorize, recognize, or analyze, but to create. The most popular example right now is DALL-E 2, a text-to-image system, meaning you write something, and DALL-E 2 does not just interpret your text; it creates an image (or images) from it.
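For readers who like to see the distinction concretely, here is a deliberately tiny sketch in Python. It is not a real AI model; the captions, labels, and function names are all made up for illustration. The "discriminative" function only sorts an input into a known category, while the "generative" function recombines patterns from the examples to produce a caption it has never seen.

```python
import random

# Toy training "examples": a few captions paired with subject labels.
# (Invented data purely for illustration.)
examples = [
    ("a cat sleeping on a sofa", "cat"),
    ("a cat chasing a laser", "cat"),
    ("a dog fetching a ball", "dog"),
    ("a dog barking at the mail", "dog"),
]

def discriminative(caption):
    """Analytic/Discriminative: assign an input to a known category."""
    for _, label in examples:
        if label in caption:
            return label
    return "unknown"

def generative(subject):
    """Generative: create a NEW caption by recombining learned patterns."""
    verbs = [t.split()[2] for t, lab in examples if lab == subject]
    endings = [" ".join(t.split()[3:]) for t, lab in examples if lab == subject]
    return f"a {subject} {random.choice(verbs)} {random.choice(endings)}"

print(discriminative("a cat sleeping on a sofa"))  # categorizes: "cat"
print(generative("dog"))  # invents, e.g. "a dog fetching at the mail"
```

Real systems like DALL-E 2 work at a vastly larger scale, of course, but the contrast is the same: one kind of model answers "what is this?", the other answers "make me one of these."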
The podcast summarizes how this works. There is a lot of talk about business, but you can focus on the technology. I love Stable Diffusion's business model, and how they share their engine under an Open Source license, but that too is for another post.
Most important here is that images are simply the next step up from text. Yes, AI has been generating text for a while already. Your students probably know about it. There is even a textbook for high school students in Japan that uses AI to generate texts (more on that in another post).
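To give a feel for how even very simple programs can "generate" text from examples, here is a minimal word-level Markov chain sketch in Python. The tiny corpus is invented for the example, and this is a much simpler ancestor of modern AI text generators, not how they actually work today: it just learns which word tends to follow which, then chains choices together.

```python
import random
from collections import defaultdict

# A tiny made-up training corpus.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Learn which words follow which word.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6):
    """Generate text by repeatedly picking a word seen after the last one."""
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the dog"
```

Modern text generators replace this word-counting with neural networks trained on billions of examples, but the basic move, predicting what plausibly comes next, is recognizably the same idea.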
When you finish listening, come back here and post a reply. I will need to approve your first reply, but after that you can comment freely. I would love to hear what you think.