Can GPT actually solve optimization problems that are specified in English?


Implementing an optimization algorithm is a fairly complex test of Programming using English, the new paradigm that Ryan Chin and Lonnie Chrisman have explored in this series in previous weeks.

If LLMs can handle it, that might open doors to new gradient-free optimization techniques.

DeepMind posted a paper last week claiming that LLMs can in fact perform optimization using just English-language prompting.

• Yang et al. (7-Sep-2023), "Large Language Models as Optimizers", arXiv:2309.03409v1

They demonstrate three examples: fitting linear regression coefficients to data, the travelling salesman problem, and prompt design. Prompt design is the most interesting of these -- the optimization searches for the most successful wording of prompts for GPT or other LLMs.

The idea that any of these examples works is pretty amazing, and we wanted to see it for ourselves. So, we reproduced their experiment for linear regression in Analytica using the OpenAI API library.
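
We can't paste our Analytica model here, but for anyone who wants to try this themselves, here is a minimal Python sketch of that kind of meta-prompting loop for linear regression, using the OpenAI API: the model is shown past (w, b, loss) triples and asked to propose a better pair. The data, prompt wording, and loop parameters are illustrative assumptions, not the exact setup from our experiment or from the paper.

```python
import json
import random

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical data generated from y = w_true*x + b_true plus noise.
random.seed(0)
w_true, b_true = 3.0, 5.0
xs = list(range(1, 21))
ys = [w_true * x + b_true + random.gauss(0, 1) for x in xs]

def loss(w, b):
    # Sum of squared errors for candidate coefficients (w, b).
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))

# Seed the history with a few random (w, b, loss) triples.
history = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(5)]
history = [(w, b, loss(w, b)) for w, b in history]

for step in range(20):
    # Show past solutions worst-to-best, as in the paper's meta-prompt.
    history.sort(key=lambda t: -t[2])
    pairs = "\n".join(f"w={w:.2f}, b={b:.2f}, loss={fv:.1f}" for w, b, fv in history)
    prompt = (
        "You are minimizing a loss function of two variables w and b.\n"
        "Here are previous (w, b) pairs and their losses, best last:\n"
        f"{pairs}\n"
        "Propose a new (w, b) pair with a lower loss. "
        'Reply with JSON only, e.g. {"w": 1.0, "b": 2.0}.'
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    content = resp.choices[0].message.content or ""
    try:
        cand = json.loads(content)
        w, b = float(cand["w"]), float(cand["b"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        continue  # skip malformed replies rather than crash
    history.append((w, b, loss(w, b)))

best = min(history, key=lambda t: t[2])
print(f"best: w={best[0]:.2f}, b={best[1]:.2f}, loss={best[2]:.1f}")
```

Plotting the best loss in `history` after each step gives a convergence curve like the one in our graph.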

The graph shows that GPT-4 (in blue) converges quickly, whereas GPT-3.5-turbo shows no convergence at all -- it is essentially performing random search.

These results partly contradict the findings of the DeepMind paper. Like us, they found that GPT-4 succeeds; however, they claim that GPT-3.5-turbo also succeeds, albeit with a slower convergence rate.

We suspect what they observed was just the apparent convergence you get from random search when you keep the best guess found so far.
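
To see why, here is a minimal sketch (with made-up data): sampling (w, b) uniformly at random and keeping the best guess so far yields a monotonically decreasing loss curve, even though no real optimization is happening.

```python
import random

random.seed(1)
w_true, b_true = 3.0, 5.0
xs = list(range(1, 21))
ys = [w_true * x + b_true for x in xs]

def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))

# Pure random search: sample (w, b) uniformly, remember the best so far.
# The best-so-far loss can only decrease, so a plot of it over steps
# looks like "convergence" even though each guess is independent.
best = float("inf")
curve = []
for step in range(200):
    w, b = random.uniform(-10, 10), random.uniform(-10, 10)
    best = min(best, loss(w, b))
    curve.append(best)

print(curve[::20])  # a slowly decreasing staircase
```

A curve like this is easy to mistake for slow convergence, which is exactly what we think happened with GPT-3.5-turbo.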
