Ted Lappas

The last few episodes of the London Futurists Podcast have explored what GPT (generative pre-trained transformer) technology is and how it works, and also the call for a pause in the development of advanced AI. In the latest episode, Ted Lappas, a data scientist and academic, helps us to understand what GPT technology can do for each of us individually.

Lappas is Assistant Professor at Athens University of Economics and Business, and he also works at Satalia, which was London’s largest independent AI consultancy before it was acquired last year by the media giant WPP.

Head start

Lappas uses GPTs for pretty much every task that involves generating or manipulating text. This includes drafting emails, writing introductions to book chapters, summarising other people’s written work, and re-writing text. He also uses them to write computer code, something that GPT-4 is much better at than its predecessors. The main value in all of these use cases is the head start: a rough first draft gets you past the terror of the blank page, and in many cases the first draft is not all that rough; it is maybe 70% of the way to final.

What is slowing GPTs down?

Given this formidable level of usefulness, why has GPT-4 not turned the world of work upside-down, at least for knowledge workers? Lappas thinks there are two reasons. The first is the fear that any data people are working with may find its way into the hands of competitors. The second is that until a mature ecosystem of plug-ins develops, GPT is just a brain: it cannot interact with the world, or even with the internet, which limits its usefulness.

Plug-ins from the likes of Expedia and Instacart are changing this, as will systems like AutoGPT and BabyAGI, which can connect with other systems and apps. (AutoGPT and its like are hard to use and frustrating at the moment, but that will improve.) This unfolding ecosystem of extensions has been compared with the development of the iPhone app store, which made Apple’s smartphone an invaluable tool. It is much easier to create a plug-in for GPT-4 than it was to create an app for the iPhone: all it takes is 30 minutes and a basic grasp of the Python programming language.
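As an illustration of how little is involved, here is a minimal sketch of such a plug-in: a small web service with a couple of endpoints, plus a manifest that tells the model what the service does and where to find its API. The to-do-list example and every name in it are invented for illustration, the manifest is abbreviated, and a real plug-in would also need to serve the OpenAPI spec it references.

```python
# A toy GPT-4 plug-in: a small web service plus a manifest the model reads to learn
# how to call it. The to-do-list domain and all names here are invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
TODOS = []  # in-memory store; a real plug-in would persist data properly

@app.route("/todos", methods=["GET"])
def list_todos():
    """Return the current to-do list so the model can read it back to the user."""
    return jsonify(TODOS)

@app.route("/todos", methods=["POST"])
def add_todo():
    """Add an item the model has extracted from the conversation."""
    item = request.json.get("item", "")
    TODOS.append(item)
    return jsonify({"status": "added", "item": item})

@app.route("/.well-known/ai-plugin.json", methods=["GET"])
def plugin_manifest():
    """The (abbreviated) manifest the model fetches to learn what the plug-in does."""
    return jsonify({
        "schema_version": "v1",
        "name_for_human": "Demo To-Do List",
        "name_for_model": "todo_demo",
        "description_for_model": "Read and add items on the user's to-do list.",
        "api": {"type": "openapi", "url": "http://localhost:5000/openapi.yaml"},
        "auth": {"type": "none"},
    })

if __name__ == "__main__":
    app.run(port=5000)
```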

Specific use cases

Lappas gives a specific example of how he uses GPT-4 in his work. He is a reviewer for an academic journal, and for each issue he is allocated around 15 papers. This means he has to read and summarise the papers themselves, plus around five reviews of each paper, and the “rebuttal” (the authors’ response to the reviews). This adds up to around 60 documents. There is no way to avoid reading them all, but GPT-4 makes it dramatically easier and faster for him to produce the summaries. He says it makes him more like the conductor of an orchestra than a writer, and it saves him five or six hours on each issue of the journal.
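For anyone who wants to try a similar workflow, a rough sketch using OpenAI’s Python library is below. The file names, prompt wording and word limit are illustrative, not Lappas’s actual setup, and very long documents would need to be split to fit within the model’s context window.

```python
# A rough sketch of drafting summaries for a stack of review documents with GPT-4.
# Uses the openai Python library (the 2023-era ChatCompletion interface); the file
# names, prompt and word limit are illustrative.
import openai

openai.api_key = "sk-..."  # your API key

def summarise(text: str) -> str:
    """Ask GPT-4 for a short summary of one document."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarise the following academic review document in about 150 words."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Loop over the documents for one paper: the paper itself, its reviews and the rebuttal.
for path in ["paper_01.txt", "review_01a.txt", "review_01b.txt", "rebuttal_01.txt"]:
    with open(path) as f:
        print(f"--- {path} ---\n{summarise(f.read())}\n")
```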

GPT-4 and marketing copy

Another field where Lappas is aware of GPT-4 making waves is the writing of marketing copy. He lives and works in Athens, and several of his friends there write posts for Amazon and other websites. If they are productive, they can churn out fifteen a day. Some are now using GPT-4 to produce the initial drafts, and some are not. Those who are not are starting to fall behind. He thinks that if he is seeing this happening in Greece, which is not a particularly technophile country, then it must be happening elsewhere too.

Websites like Fiverr and Upwork are forums where companies needing copy advertise projects for freelance copywriters to bid on. These sites are seeing an increase in the number of pitches for each piece of work, and the suggestion is that freelancers are using GPT-4 to increase the number of projects they can bid for and fulfil. Unfortunately the quality of the resulting work is not always high, because it has not been edited thoroughly, and clients often warn that copy produced by machines will not be accepted. After all, if a machine can produce the copy, the client company could instruct GPT-4 itself, rather than paying a freelancer to do it.

Higher up the value chain are copywriters who generate their content through bespoke interviews with the client’s personnel, or with people suggested by the client, and then transform those discussions into sparkling English. Reports suggest that GPT-4 is making fewer inroads into this level of the market, although that will probably change in time. Lappas reports that his colleagues at WPP are acutely interested in how quickly this march up the value chain will happen.

GPT and travel writing

One of the ways this will happen is that the AI models can be fine-tuned by ingesting samples of work by an experienced copywriter. I do this to help with my “Exploring” series of illustrated travel books. On Chatbase, a platform built on GPT-4, I trained a new bot by feeding it the contents of six of my previous books. In the settings, I specify the parameters for the book in general, for instance telling the bot to avoid flowery language with lots of adjectives, and to avoid impressionistic introductions and conclusions.

I then write a tailored prompt for each new chapter of the book I am currently working on. Churning out copy at several hundred words a minute, the bot does a reasonable job of imitating my writing style, although I still have to do a substantial amount of editing and fact checking, both to improve the readability, and also to weed out the factual errors and the so-called “hallucinations” – the apparently random inventions it includes.
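Chatbase handles the ingestion and the settings behind its own interface, but the underlying idea can be sketched directly against the OpenAI API: a system prompt that carries the style rules, excerpts from earlier books as examples of the target style, and the tailored chapter prompt as the user message. The style rules, file name and prompt below are illustrative, not the actual settings used for the “Exploring” books.

```python
# A rough approximation of the style-imitation setup, written against the OpenAI API
# rather than Chatbase. The style rules, excerpt file and chapter prompt are illustrative.
import openai

openai.api_key = "sk-..."

STYLE_RULES = (
    "Write travel-guide prose. Avoid flowery language with lots of adjectives. "
    "Avoid impressionistic introductions and conclusions. State facts plainly."
)

# Excerpts from earlier books, used as examples of the target style
# (trimmed so the whole prompt stays within the model's context window).
with open("sample_chapters.txt") as f:
    samples = f.read()[:6000]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": STYLE_RULES + "\n\nExamples of the target style:\n" + samples},
        {"role": "user",
         "content": "Draft a 600-word chapter on the Old Port of Marseille."},
    ],
)
print(response.choices[0].message.content)
```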

Exploring Marseille

Advice for the curious and the nervous

Lappas strongly advises everyone who wants to understand what GPTs are capable of now, and what they will be capable of in future, to play with the models. He also urges us all to invest the time and effort to learn a bit of Python, which will open up a wide range of tools. He sees evidence that people are taking this advice in the rapidly growing number of TikTok videos sharing GPT-based success stories. He notes that GPT-4 itself can help you learn how to use the technology, which means it is now easier to learn Python than it was last year, and it is also more worthwhile.

Even if the idea of doing any coding at all is abhorrent to you, it is still worth playing with GPT-4, and it is worth using it on a project that has legs. It costs £20 a month to get access to GPT-4, so you might as well get your money’s worth. Diving into the model with no clear goal in mind may leave you relatively unimpressed. It is when the model helps you achieve a task that would otherwise have taken a couple of hours or more that you realise its power.

Generating images

GPT-4 was preceded by DALL-E, Midjourney, and many other image-generating AIs, but these are still harder to use effectively than their text-oriented relatives. GPTs are now pretty good at analysing and explaining imagery: identifying celebrities in photos, for instance, or explaining that a map of the world made up of chicken nuggets is a joke because maps are not usually made that way.

Midjourney is often said to be the best system for generating images from scratch, although it isn’t the easiest one to use. Like other, similar systems, it still struggles with certain kinds of images, notably hands, which often come out with six fingers instead of five.

Another useful process is known as in-painting, where the system ingests an image and edits it. For instance it could replace a cocker spaniel with a Labrador, and adapt the background to make the new image seamless. This process is not yet good enough for use by WPP agencies, Lappas reports, but it is close.
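The open-source diffusers library (not the tooling Lappas is referring to inside WPP) gives a feel for how in-painting works: you supply the original image, a mask marking the region to replace, and a text prompt describing what should fill it. The file names and the spaniel-to-Labrador prompt below are illustrative.

```python
# In-painting with the open-source diffusers library: the white area of the mask is
# regenerated to match the prompt, and the rest of the image is left alone.
# File names and the spaniel-to-Labrador prompt are illustrative.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("garden_photo.png").convert("RGB")   # original photo
mask = Image.open("dog_mask.png").convert("RGB")         # white where the spaniel is

result = pipe(
    prompt="a black Labrador sitting on the lawn",
    image=image,
    mask_image=mask,
).images[0]

result.save("garden_photo_labrador.png")
```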

Open source

On platforms like Hugging Face you can find open source models, which you can tailor to your own requirements. The open source models are not yet as powerful as the models from the tech giants and the AGI labs (OpenAI and DeepMind) but they are improving quickly, and according to a recently tweeted memo that was presented as a leak from Google, they will soon overtake the commercial models. Their interfaces are less user-friendly at the moment, but again, that will probably change quickly.
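As a taste of what tailoring an open-source model starts from, the sketch below runs a small instruction-tuned model through Hugging Face’s transformers library. Flan-T5 is chosen here only because it fits on an ordinary laptop; the hub hosts far larger and more capable open models, and fine-tuning one on your own data is a further step beyond this.

```python
# Running a small open-source, instruction-tuned model locally with Hugging Face
# transformers. The model is one example of many on the hub; larger models need a GPU.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

prompt = "Summarise in two sentences why plug-ins make language models more useful."
result = generator(prompt, max_new_tokens=120)
print(result[0]["generated_text"])
```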

2024 is the year of video

Lappas thinks that image generation will be largely solved by the end of 2023, and that 2024 will be the year of video. He concludes by saying that now is the time to jump in and explore GPTs – for profit and for fun.
