Over the last few weeks, I’ve been writing about my experiments with Antigravity, or rather, with the (more or less) intelligent agents integrated into the editor. The results have been mixed: sometimes the agents proved very effective, accurately handling complex or repetitive tasks; in other cases they accomplished nothing worthwhile and only wasted a huge amount of time.
I am well aware that LLMs have poor memory, but I never imagined that I would suffer the consequences so quickly.
– Image generated by Google Gemini.
Note to the reader. This article complements the previous one, Antigravity: a driver written by AI, and should be read afterward. However, here’s a brief recap for lazier readers.
As effective as Antigravity may be, digging a little deeper reveals that the agent-based systems working inside it, however helpful and capable at answering many complex questions, are not exempt from the usual shortcomings of the large language models (LLMs) we’ve been dealing with for the past three years.