My employer is a Google Partner. They also use G Suite for most of what other organizations use O365 for. As a result, they've been "encouraging" us to use Gemini.
Today, I was running into weirdness when trying to port a SaltStack formula I'd written for RHEL to also work on Windows servers. I'm not a Windows guy. Worse, I was running into problems with SaltStack functionality that "just worked" on Linux. So, I opted to avail myself of Gemini to try to diagnose what turned out to be a fairly pernicious problem with a common templating mechanism I was using.
While I shouldn't be surprised, it turns out that SaltStack's modules for Linux are a lot more robust than those for Windows. I say, "I shouldn't be surprised," because I've run into similar problems with putatively cross-platform tools like Terraform …or even just different CSPs' CLIs and APIs. Which is to say, much like automation frameworks for AWS are considerably more mature than those for Azure, frameworks that work well for Linux-oriented configuration-management can be rage-inducing when you try to use them on Windows.
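For context, the kind of cross-platform divergence I'm talking about looks something like the following. This is a hypothetical, minimal sketch — the state IDs, paths, and formula name are illustrative, not the actual formula I was porting:

```yaml
# Hypothetical SLS sketch: branch on the os_family grain so one formula
# can target both RHEL and Windows hosts. All names/paths are illustrative.
{% if grains['os_family'] == 'Windows' %}
myapp_config:
  file.managed:
    - name: 'C:\ProgramData\myapp\app.conf'
    - source: salt://myapp/files/app.conf.jinja
    - template: jinja
{% else %}
myapp_config:
  file.managed:
    - name: /etc/myapp/app.conf
    - source: salt://myapp/files/app.conf.jinja
    - template: jinja
    - user: root
    - group: root
    - mode: '0644'
{% endif %}
```

Even in a toy case like this, the asymmetry shows: on Linux, the `user`/`group`/`mode` arguments to `file.managed` just work, while on Windows you're into a separate set of ACL-oriented arguments and modules — exactly the kind of divergence that makes these ports painful.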
Ultimately, working with Gemini helped me dig far enough into the weeds to get to my solution. However, Gemini's "working memory" (as compared to its "long-term" memory) is almost ridiculously small. At many points, it felt like I was working with a goldfish. It kept seeming to "forget" information I'd previously shared with it. So, it would ask for the same tests/results and make the same fix suggestions over and over again. In frustration, I asked it what the hell was going on.
As bad as the "forgetfulness" was, it also seemed to fasten onto provably dead diagnostic and fix paths. Which is to say, it would tell me to do stuff "we" had already tried, sometimes multiple times, and had shown to be either unhelpful or to have created new problems.
Net result? I kept having to say, "goldfish: we did that already" or "goldfish: I told you to let that go" or "goldfish: how many times do I need to tell you that hard-coding shit isn't acceptable to me".
For all of the "memory" problems, when I asked for an end-of-day summary, it was able to summarize (though, not especially well) the things we'd done. I had to tell it "hey, you seem to have forgotten <THING>" ...which would prompt it to find <THING> and include it in its summary. All in all, not a particularly reassuring experience and one that continues to leave me feeling like "AI chat tools for coding support aren't for people who aren't experienced enough to know better". Worse, they're probably also not for people whose own memory isn't above-average.
