Global Column | Why You Need an Application, Not an LLM

One sign that AI is still in its early stages is how much users still have to figure out for themselves. Jono Bacon, founder of Community Leadership Core, laments that the act of “selecting between LLM models” to run a query is “complicated and confusing for most people.” And once you’ve chosen the “right” model (whatever that means), you still have to do all sorts of work to make sure the model returns relevant results. Forget about getting consistent results; consistency isn’t something LLMs currently offer.


That said, when I asked RedMonk co-founder James Governor whether AI and generative AI had lost their luster, he replied with a resounding “no.” We may be in a trough of disillusionment right now, but that’s because every major new technology follows the same arc, moving from indifference to adoration to ridicule to general adoption. Some software developers are already well into the latter stages; for others, it will take longer.

In the end, what matters is consistency

It’s long been clear that AI will take time to get its act together. As Governor puts it, “most AI art tends to be kitschy,” and before long you find yourself fiddling with a tool like Midjourney to coax out a usable image. Is it because computers don’t know what good art is? As ardent AI critic Grady Booch points out, we sometimes assume that AI can reason and think, but that isn’t true: “Human thought and human understanding are not simple statistical processes like an LLM’s, and to claim they are is a serious misunderstanding of the sophisticated uniqueness of human cognition.”

People are not like LLMs or machines. AI cannot paint like Van Gogh. It cannot write like Woolf. It can imitate, but it will never reach the level of human cognitive ability.

That’s not to say AI isn’t useful. For example, I recently asked my friends for their opinions on a tricky business problem. I fed their various unstructured responses into ChatGPT and asked it to summarize them. The results were surprisingly good. I double-checked to make sure it wasn’t simply parroting what one of the human respondents had said. It wasn’t. Summarizing seems to be something machines can do quite well.

Generative AI can also be quite useful to software developers without threatening the core of their work. I say this on the strength of Honeycomb CTO Charity Majors’ excellent observation that coding is the least interesting part of what software developers do. As Kelsey Hightower argues, “Writing code should be the last thing a developer does.” Instead, generative AI can help developers get unstuck, fill in boilerplate code, and see how something might look in an alternative language.

In short, AI and generative AI are already genuinely useful today. The problem is that using them well is still far too difficult.

Turning AI into Applications

Let’s go back to Jono Bacon. What Bacon is asking for is simple: don’t make me choose a model; choose one for me based on my request. He’s asking for genuinely cumbersome, complex AI infrastructure to be turned into an application. We’re starting to see this in Apple Intelligence, Google Search, and elsewhere: companies are embedding AI in their applications so that users don’t have to do the undifferentiated heavy lifting of choosing infrastructure and crafting prompts.

This is essential, given that generative AI still requires a lot of manual work to deliver actionable results. Dan Price points out that “you need to provide all the context that the model needs to answer the question.” The only way to learn which context is most helpful for getting even somewhat consistent results is to “play with the model.” Now application vendors are going to do that playing for you.

Price adds, “It’s better to break down a complex task into smaller subtasks that are completed over several conversations rather than trying to complete it all at once with one complex initial command.” Again, the application vendor should do this for the user. “Since you’re interacting with an aggregate of all human writing, defining specific personas that can help you with the task is a better approach,” says Cristiano Giardina. Why should the user do any of this? Let the application vendor do it.
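The techniques above (picking a model for the user, injecting context, splitting a request into subtasks, and assigning a persona) are exactly the plumbing an application can hide. Here is a minimal sketch of what that wrapper layer might look like. Every name here (`route_model`, `PERSONAS`, the model labels, the naive split on “ and ”) is a hypothetical illustration, not any vendor’s actual API; a real product would wire this to a live LLM endpoint.

```python
# Hypothetical sketch: an application layer that hides LLM plumbing
# from the user. Names and model labels are illustrative assumptions.

PERSONAS = {
    "summarize": "You are a concise analyst who distills feedback into key themes.",
    "code": "You are a senior engineer who writes idiomatic, well-commented code.",
}

def route_model(task: str) -> str:
    """Pick a model on the user's behalf (Bacon's ask) instead of
    making them choose one."""
    return "small-fast-model" if task == "summarize" else "large-capable-model"

def decompose(request: str) -> list[str]:
    """Split one complex request into smaller subtasks (Price's advice).
    A real app might use the model itself to plan; here we naively
    split on ' and ' purely for illustration."""
    return [part.strip() for part in request.split(" and ")]

def build_prompts(task: str, request: str, context: str) -> list[dict]:
    """Assemble persona + context + one subtask per prompt, so the
    user never hand-crafts any of it."""
    persona = PERSONAS.get(task, "You are a helpful assistant.")
    model = route_model(task)
    return [
        {
            "model": model,
            "system": persona,
            "user": f"Context:\n{context}\n\nTask: {subtask}",
        }
        for subtask in decompose(request)
    ]

# Example: the summarization scenario from earlier in the column.
prompts = build_prompts(
    task="summarize",
    request="group the responses by theme and flag any disagreements",
    context="Friend A says X. Friend B says Y. Friend C agrees with A.",
)
```

The point of the sketch is the division of labor: the user supplies only the request and raw material, while the application decides the model, persona, context packaging, and task breakdown.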

We can go a little further. The point is that in these early days of AI, we keep expecting mainstream users to do all the work of understanding and manipulating still-crude LLMs. Just as compiling Linux was never the “job” of mainstream companies, this is not the job of mainstream users. Companies like Red Hat emerged to package Linux distributions for the mass market. We will soon need the same for generative AI. That will dramatically increase adoption and productivity.

Source: www.itworld.co.kr