Ingrid’s Space

What Google Translate Can Tell Us About Vibecoding

There has been rather a lot of doomsaying (and perhaps astroturfing) lately about LLMs as the end of computer programming. Much of the discussion has been lacking nuance, so I’d like to add mine. I see claims from one side that “I used $LLM_SERVICE_PROVIDER to make a small throwaway tool, so all programmers will be unemployed in $ARBITRARY_TIME_WINDOW”, and from the other side flat-out rejections of the idea that this type of tool can have any utility.1 I think it best sheds light on these claims to examine them in the context of another field that’s been ahead of the curve on this: translation.

Google Translate has been around for a while, and has gone through some technological iterations; I’m most interested in discussing its recent incarnations since the switch to neural machine translation in 2016. Over the years I’ve heard much made about how this is the end of translation and interpretation as professions. I suspect the people who say such things have never actually worked with a translator or interpreter. The emblematic example I’ve encountered is “I went on holiday to Japan and we used Google Translate everywhere, there’s no need to hire an interpreter or learn Japanese anymore”. While this undoubtedly speaks for the usefulness of current machine translation technology, the second half of the sentence calls for some scrutiny, particularly “anymore”. I feel confident in asserting that people who say this would not have hired a translator or learned Japanese in a world without Google Translate; they’d have either not gone to Japan at all, or gone anyway and been clueless foreigners as tourists are wont to do.

Indeed it turns out the number of available job opportunities for translators and interpreters has actually been increasing. This is not to say that the technology isn’t good; I think it’s pretty close to as good as it can be at what it does. It’s also not to say that machine translation hasn’t changed the profession of translation: in the article linked above, Bridget Hylak, a representative from the American Translators Association, is quoted as saying “Since the advent of neural machine translation (NMT) around 2016, which marked a significant improvement over traditional machine translation like Google Translate, we [translators and interpreters] have been integrating AI into our workflows.”

To explain this apparent contradiction, we need to understand what it is translators actually do because, like us programmers, they suffer from having the nature of their work consistently misunderstood by non-translators. The laity’s image of a translator is a walking dictionary and grammar reference, who substitutes words and grammatical structures from one language to another with ease. In reality, translators’ and interpreters’ work is mostly about ensuring context, navigating ambiguity, and handling cultural sensitivity. This is what Google Translate cannot currently do.

To give a simple example, Norwegian is extremely closely related to English and should be an easy translation candidate. The languages share a tonne of cognates, very similar grammar, and similar cultural context; even the idioms tend to translate verbatim. Yet there remain important cultural differences, and a particularly friction-prone one is Norwegian’s lack of polite language. It’s technically possible to say please in Norwegian (vær så snill, or vennligst), but Norwegians tend to prefer blunt communication, and these are not used much in practice. At the dinner table a Norwegian is likely to say something like “Jeg vil ha potetene” (literally “I will have the potatoes”, which sounds presumptuous and haughty in English) where a Brit might say “Could I please have some potatoes?”. A good interpreter would have the necessary context for this (or ask for clarification if they’re not sure) and provide a sensitive translation; Google Translate just gives the blunt direct translation. You can probably work past such misunderstandings at dinner with your foreign in-laws (and people do), but it should be apparent why it’s inadvisable to substitute Google Translate for an interpreter at a court hearing. And Norwegian is an easy case. Returning to our tourists, Japanese has wildly different grammar to English, including things like omitting subjects from sentences where they’re apparent from context. In many of these cases you can’t construct a grammatical English sentence without a subject, so Google Translate will make one up. Would you be comfortable with a computer inserting a made-up subject into your sentence?

All this is not to say Google Translate is doing a bad job. Were I given “Jeg vil ha potetene” with no context or ability to clarify and asked to translate it to English, I’d give the same answer. Maybe the person does want to be rude, how should I know? As a bilingual, I actually do make heavy use of Google Translate, but my use case isn’t “Here’s a block of text, translate it for me”. Instead I have more specific and subtle workflows like “I already know what I want to say, how to say it, and can navigate cultural nuance, but I’m not happy with my wording, I’d like to see the most statistically likely way someone else might phrase this” (a task language models really excel at, as it turns out). I suspect this is what Bridget Hylak meant when she said she has been integrating AI into her workflows (though I also suspect her tools and workflows are more sophisticated than mine).2

It’s a similar story for programming. I think it’s even fair to characterise us as translators, just from squishy humans that speak in ambiguity and cultural nuance, to computers that deal only in absolutes.3 There’s the added complication that we create new abstractions a lot more aggressively in programming languages, and that’s probably why machine translation of programming languages took a little while to catch up to machine translation between natural languages, but Big Tech™ chucked all of open source into a wood chipper, and we’re there now.

For what it’s worth, I don’t think it’s inconceivable that some future form of AI could handle context and ambiguity as well as humans do, but I do think we’re at least one more AI winter away from that, especially considering that today’s AI moguls seem to have no capacity for nuance, and care more about their tools appearing slick and frictionless than providing responsible output.


  1. It is reasonable to say that the tools have limited utility though, and that the utility is outweighed by their negative externalities. ↩︎

  2. Even though I’ve laid out this use case, I don’t intend to take this up in practice anytime soon. I don’t think it’s nearly a groundbreaking enough productivity gain to be worth ignoring the fraught ethical status of the current tools. ↩︎

  3. I’ve met plenty of programmers who really seem to believe our main function is to pump out code, and that more code is better. I’d like to think having a code-barfing machine will show them the error in this, but unfortunately I expect a lot of them will continue to survive on pure organisational dysfunction. ↩︎