Sunday, July 7, 2024

AI things, year 2024.

This is a small update to the "AI and you" post from 2023.

      Why the need for alignment at all?
Can't we just let stuff happen at its own pace and maybe everything works out? But you see, we already have an example of misaligned powerful entities. They are called companies. The feedback loop was supposed to look like this: companies "care" about maximizing profit, customers choose the product they like the most, companies compete, customers reward them by buying, everyone is happy. And yet, when sufficiently powerful optimization pressure was applied to the task of profit maximization, something went wrong.
Look around you.
The first true user-hostile OS, win10, and its worthy successor, win11. The audio/video streaming services with monthly payments for the files you will never be allowed to own and will lose access to the moment you stop paying. The chipped printer cartridges. The cars that track and report your every move, some even film you.
Was there even a single end user who asked for such things? And yet not only do these products and services exist, they are ubiquitous in the market.
In fact, this is not surprising. Companies and consumers have to bargain over how the total surplus of a trade gets divided: if a gadget costs $10 to make and is worth $50 to you, any price in between leaves both sides better off, and the question is who captures how much of that $40. The more powerful companies become, the better the tactics and strategies they can figure out, and the larger the portion of the surplus that goes to them, while the consumers' portion shrinks. In the limit the entire surplus goes to companies, so for consumers every purchase would suck so bad that it'd be barely worth it (maybe I should make a longer post about this, if there isn't one already somewhere). From time to time there are attempts to patch newly discovered avenues of abuse through the introduction of new laws, but all of that is a bandaid approach which ignores the core issue. Btw, I'm not claiming to have a perfect solution, just pointing at the pattern.
This is a very general observation: when you want some result out of a system but cannot specify exactly what you want, and instead optimize a proxy measurement, then the better that system is at optimization, the more creative the solutions it finds for maximizing the proxy, and the further those solutions drift from the result you actually wanted. The proxy and the true goal are not perfectly correlated, and the optimized outcome ends up extreme along some metric, just not the one you needed.
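To make that concrete, here is a minimal toy sketch (mine, purely illustrative; the Gaussian candidate model and the noise level are assumptions, not taken from any real system): each candidate has a hidden true value and a noisy proxy score, we pick whichever scores highest on the proxy, and the harder we search, the more the winner's proxy score overstates its true value.

```python
import random

def pick_by_proxy(n_candidates, noise=1.0):
    """Generate candidates with a hidden true value and a noisy proxy score,
    then select the one that maximizes the proxy (not the true value)."""
    best = None
    for _ in range(n_candidates):
        true_value = random.gauss(0.0, 1.0)            # what we actually want
        proxy = true_value + random.gauss(0.0, noise)  # what we can measure
        if best is None or proxy > best[0]:
            best = (proxy, true_value)
    return best

def average_overstatement(n_candidates, trials=3000):
    """Average (proxy score - true value) of the selected candidate."""
    gaps = [p - t for p, t in (pick_by_proxy(n_candidates) for _ in range(trials))]
    return sum(gaps) / len(gaps)

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        print(f"best of {n:>4} candidates: proxy overstates true value by "
              f"~{average_overstatement(n):.2f}")
```

With no selection pressure the overstatement averages out to zero; crank the search up and it keeps growing, which is exactly the divergence described above.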

      Update 1.
Most likely, shifts in public opinion won't happen or, at best, will come too late.
I mean, come on. Somewhere between 5 and 20% of the time-to-AGI has already passed since the publication of the open letters. What changes in public sentiment have you observed? AGI risks are reliably below the noise level in all "what are you worried about?" free-response polls. Polls that ask "are you worried about X, yes or no?" are bullshit and should be discarded: to blurt out "yes" respondents only need a weak semantic association of "X bad", without any understanding of the underlying problem, its severity, or any willingness to act.
Point number two: even much tamer problems like global warming, which are more easily understood by the public, had more time to gather support, start slowly and proceed without discontinuities, and allow solutions that are understandable by most engineers and not terribly expensive relative to the potential cost of straightforwardly tanking the damage, are still not even close to being solved. Market response to the genuine worries of the population? Greenwashing, of course, why would you expect it to be any different? "We are deebly goncerned", "Look how much CO2 we are saving", "Now go buy our products". Protests? Happen occasionally, sometimes in stupid and always in ineffective ways. Rarely make the news and don't have any discernible impact.
What would cause me to admit I might be wrong wrt this one: 10k protesters in one place with most of their signs directly referencing extinction risks, i.e. not a job-loss protest, which could be appeased by promises of UBI.
> Might?
Definitely wrong about "opinion change won't happen", and might be wrong about "opinion change won't have any effect on the outcome". See the AGW example above.

      Update 2.
Quite likely political action will not save us.
One of the great showcases of how politicians work is a small part of the interview with the current head of the US Federal Trade Commission; you can watch it here (start at the linked timestamp, end at +30 seconds).
Look carefully. Does that seem like the face of a human about to face self-admitted Russian-roulette odds of dying? Lolnope! It's more like the face of a hallucinating LLM pondering which token it should output next to get the most approval out of its trainers.
You shouldn't ignore what every politician says because they know the situation and then strategically choose to lie to deceive you.
You should ignore what every politician says because for them words are these strange air vibrations they make to win friends, discredit enemies and influence people, not something that bears any relevance to describing the universe around you.
What would cause me to admit I might be wrong wrt this one: agreement between at least two of the US-EU-China.
What else? A significant drop in the price of Nvidia and other AI-related stocks that is unrelated to unexpected mundane internal problems and is thus indicative of long-term market sentiment (this would be really difficult to operationalize if you wished to bet on it).
Chance they will sleepwalk into a solution? I would not exclude that as absolutely impossible, but how exactly would that happen, and how brittle would the result be? And remember, in the end it would have to include the whole world for it to work. However, it is far, far more likely that politicians will instead be lobbied (read: bribed) into oblivion by the now-rich AI companies who want no meaningful interference with their business.

      OpenAI fiasco.
If you are interested in more detailed coverage, go and read this, then this. The short version is: it was revealed earlier this year that OpenAI was forcing every departing employee to sign a lifetime non-disparagement agreement with a non-disclosure clause. In plain English, they had to promise never to speak badly of OpenAI for the rest of their lives and never tell anyone about the existence of the agreement itself. Why would anyone do such a thing? Well, if they didn't decide quickly, OpenAI threatened to shut them out of the events where they could sell their company equity, effectively clawing back a huge part of their past compensation! The tactic worked until it didn't, and one day this information became public knowledge. Maybe it was a very principled employee who said "screw you, shitheads, I'm not signing and I'm letting everyone know even if it costs me", in which case I stand up and applaud the hero, we desperately need more people like that. Or maybe, in a twist of irony, someone accidentally missed a very tight deadline OpenAI imposed to manipulate people into signing and then decided to tell the world because there was nothing left to lose.
Of course, after it became public and everyone began asking "WHAT THE FUCK, is this for real, how is that even legal??", OpenAI leadership went into full damage control mode. They didn't know about these arrangements, and even if they did they didn't mean it, and even if they did they didn't plan on exercising their option, and even if they did they are sorry, and can we please forget this small incident already?
Trust is asymmetric. If you need evidence that someone or something holds no ill will towards you, then every instance of "doesn't do a bad thing" is an incrementally smaller and smaller update, but a single instance of "does a bad thing" instantly falsifies the hypothesis. You should never trust a company anyway, because companies are not your friends, but this serves as a reminder that you should super-duper distrust OpenAI in particular. If this is how they treat their former employees (read: cogs), then how do you expect them to treat you, the moneybag customer? OpenAI is worse than Amazon, worse than Microsoft, worse than Google. Avoid at all reasonable costs.
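A back-of-the-envelope way to see that asymmetry (a toy Bayes update; the likelihood numbers below are made up purely for illustration): suppose a company that means well almost never does the bad thing, while an ill-willed one does it fairly often. Then each quiet month nudges your belief up only a little, with diminishing returns, while a single bad incident slams it down.

```python
def update(prior_good, observed_bad, p_bad_if_good=0.001, p_bad_if_ill=0.2):
    """One Bayes update of P(company means well) after observing its behaviour.
    The two likelihoods are made-up illustrative numbers, not measurements."""
    like_good = p_bad_if_good if observed_bad else 1 - p_bad_if_good
    like_ill = p_bad_if_ill if observed_bad else 1 - p_bad_if_ill
    numerator = like_good * prior_good
    return numerator / (numerator + like_ill * (1 - prior_good))

if __name__ == "__main__":
    belief = 0.5
    for month in range(1, 11):                  # ten uneventful months in a row
        belief = update(belief, observed_bad=False)
        print(f"after {month:>2} good observations: P(means well) = {belief:.3f}")
    belief = update(belief, observed_bad=True)  # then one scandal
    print(f"after 1 bad observation:     P(means well) = {belief:.3f}")
```

With these made-up numbers, ten clean months crawl from 50% up to roughly 90%, and one incident drops it straight back down to a few percent.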

      AI makes music.
I said that "I'd be moderately surprised if by the end of 2024 we don't have algorithmic generation of music", and the surprise didn't happen. There are now at least two startups offering just that to the general public. Sure, it's not yet at the stage where it beats human musicians (who need to spend hundreds of hours on their works, by the way), and it's probably not even at the stage where you can ask it to make a touhou remix of something (I didn't try), but have you looked at the rate of progress recently? One day there is nothing; half a year later you have a model that recognizes genres and synthesizes music, does text-to-vocals in multiple languages, costs so little it can be offered as a free trial to everyone, and almost everyone goes "uh, ok, nothing out of the ordinary here". Image generation took several years to get to an equivalent stage in its own domain.
I'm surprised for other reasons, though. If the model was trained purely on copyright-free music, I don't understand how it got so good at everything; if it was trained on paid copyrighted music, I don't understand how they could get so much data without going broke; if it was trained on copyrighted music without paying, I don't understand how the startup founders expect not to get caught at some point. Maybe they paid for the content itself, but not for the rights to use it in any way they'd like, and hoped that music organizations either won't notice them, or won't sue, or that if they do, the lawsuits will either get resolved in their favor or drag on for long enough not to matter? We'll see.
[ Just in case it sounds like I'm defending the concept of copyright - I'm only discussing the practical implications of the current legal system. Copyright should be abolished entirely. The good news is that it will likely happen within our lifetimes, let's say in about 4 to 19 years from this post. The bad news... ]

      AI as destabilizing factor.
For any decisionmaker of a nation that has some amount of military rivalry with the countries developing AGI, merely the lightweight version of the AI risk argument (the one that says AI will greatly accelerate the rate of technological development) presents a challenge if they take it seriously. Fairly early into the Cold War we settled on the MAD equilibrium and it sort of worked. Uneven progress risks upsetting that. A greater tech disparity would at first endanger second-strike capabilities, necessitating a return to launch on warning, which is far more prone to accidents. Going even further, one could reasonably worry that a stealth first strike by the higher-tech side could fully disarm the defender, which would lead to even more exciting discussions. Of course, even if the race leader would prefer, and would choose, not to attack in all but the most extreme circumstances, out of risk aversion or other reasons, the mere possibility that they could do so if they wanted is enough. The main thing that could prevent escalation, I think, is the fact that most politicians are senile dumbtards who can't see further than their own nose in anything but political matters, and would ignore reality staring them in the face until it's too late.
But wait. If such a conflict started, and assuming that chipmaking factories and the factories producing chipmaking equipment got hit, wouldn't that actually be... good news, at least for those not directly involved? Short term (a decade or two), fairly likely, as long as it did not get too out of hand. Long term is less clear. On one hand, AI would at least partially be blamed for what happened purely by existing and triggering the exchange, so maybe every country would just "voluntarily refrain" from building it again, i.e. preemptively and collectively beat the crap out of anyone attempting to build it? On the other, we'd live in a world with less cooperation and thus reduced potential for AI nonproliferation treaties; the main problem of alignment wouldn't be solved, just delayed; and all the knowledge about the algorithms and techniques used to build AIs won't magically disappear and won't need to be rediscovered. Overall, in my opinion, the transition to general AI and then to superintelligence will happen so quickly that almost no one will react in time for this to be an issue worth contemplating.

      Closing notes.
Originally I stated my expectation of things going not too well for everyone as 50%, but this was not an entirely precise formulation. If you are asked "what is the probability of a single fair coin landing heads or tails?", then the only reasonable answer is 50%. However, if you are asked "what is the probability of this potentially biased one-bit-output RNG landing heads or tails?", then, in the absence of annoying trickery, the reasonable answer would still be 50%, but purely because it'd be the mean and median of a symmetric prior probability PDF (yes, that's indeed a "probability probability density function", a rare case of a false alarm for an oversensitive RAS syndrome detector), unlike the delta function of the coin in the previous sentence. In other words, we're dealing here not just with the "normal" uncertainty about the outcome of the evolution of an unpredictable system, but also with the meta-uncertainty of not knowing what exactly we're dealing with in the first place. In other other words, one number is too coarse a description, which, while true, leaves out more detail than I'd like. Full information would be contained in the shape of the PDF, but I think that's excessive detail, because people don't exactly think in those terms. A fine compromise is two numbers: a lower and an upper bound, such that you would find it hard to be convinced (it would require a lot of arguments/evidence) that the "real answer" lies outside these bounds ("real answer" in quotes because probabilities exist in your head, the territory deals with outcomes). Back then I was probably thinking of something like 10-90%, which now feels more like 50-90%. It's still far from certainty, but if I were an outside observer betting at even odds, I'd definitely bet against a good outcome.
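If it helps, here is a small sketch of the coin-vs-RNG distinction (my own illustration; the Beta priors and their parameters are arbitrary assumptions picked to show the contrast): several priors over the "real answer" can share the same mean while implying wildly different bounds, and those bounds are exactly the information a single number throws away.

```python
import random

def summarize(alpha, beta, samples=100_000):
    """Monte Carlo summary of a Beta(alpha, beta) prior over the 'real answer':
    its mean plus a central 80% interval (the lower/upper bounds from the text)."""
    draws = sorted(random.betavariate(alpha, beta) for _ in range(samples))
    mean = sum(draws) / samples
    lo, hi = draws[int(0.10 * samples)], draws[int(0.90 * samples)]
    return mean, lo, hi

if __name__ == "__main__":
    # A fair coin is a delta function at 0.5; these priors also all "answer 50%"
    # (same mean), but carry very different amounts of meta-uncertainty.
    for a, b in [(1000, 1000), (5, 5), (1, 1)]:
        mean, lo, hi = summarize(a, b)
        print(f"Beta({a:>4},{b:>4}): mean = {mean:.2f}, "
              f"80% interval = ({lo:.2f}, {hi:.2f})")
```

Beta(1000,1000) behaves almost like the coin's delta function, Beta(1,1) is a flat "I have no idea", yet all three report the same 50% if you only ask for one number.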
> Why not lower than 50% or lower than 90%?
How do you expect this to play out in a way that doesn't end badly? Next AI winter? First, it needs to happen; second, it is only a delay. Moore's law breaks down just in time? Even with current hardware there is plenty of space for horizontal growth. BCIs come first? Interfacing with biology is complicated for both technical and legal reasons.
> Why not higher than 90% (upper bound)?
Mostly meta-uncertainty, I guess? Reserving a small corner where everyone worrying today turns out to be totally wrong about this somehow, in a way visible only in retrospect.
> Why not higher than 50% (lower bound)?
Again, because of uncertainty and optimism. Maybe doing things we don't know how to formally specify (imparting values) to the systems we don't understand turns out for some magical reason to be easier than impossible AND whoever ends up in charge of setting initial dynamics happens to be nice? Endgame chaos and disruption that... make the situation better rather than worse? I seriously don't know why I keep querying myself and getting back an estimate this low, even though I just can't see a good path forward.


To summarize: I think we are in deep shit. Most people don't know where we are headed; of those who do, many are delusional in thinking it is not a problem or that it will be easily solved; of those who remain, no one has a reliable plan. Help isn't coming and the deadline is approaching rapidly. I'm sorry if that doesn't sound comfy, but I don't have anything uplifting to say. If any of you are aliens, time-travelers or espers, better start acting now.

6 comments:

  1. The last paragraph truly feels depressing. When I discussed the idea of an AI disaster with those who understood the essence of the problem, none of us could give a clear answer that even remotely resembled a solution. Obviously no one believes in unrealistic scenarios where all of humanity suddenly comes to its senses and stops developing AI. But there was always hope that someone smarter and more influential was already working on a solution to the disaster and that we only had to wait for salvation. Reading all of this and realizing that, indeed, even those remaining who understand the scale of the fuck-up have no clear way to prevent the catastrophe and save humanity brings despair.
    I actually also don't have anything to say... I was just glad to see that someone somewhere has the same fears and worries about our disappointing future. Hope you find some comfort in the fact that you are not alone in your thoughts and concerns. It's the only reason why I wrote this. Take care.

  2. Meh.

    I think you're taking this too seriously. Not in the sense that I'm trying to downplay the risk of AGI (my comment on the previous post should explain as much), but in the sense that you're letting something that you personally cannot do anything about affect your mental state to this degree. While I agree that if nobody talks about it, nothing will happen, many AI researchers are in fact talking about it (when they're not busy advancing AI tech, of course). I don't know whether that has achieved much of anything yet. AI security is still a branch that very few AI researchers, and people in general, actually care about, despite the constant attempts at publicity.

    In a sense, I don't think it's worth worrying about things that are directly outside the purview of what you can affect personally. It's needless worry that achieves nothing aside from making you stressed out. Can you meaningfully contribute to AI security research? If you can, you should, since it's bothering you this much. If you can't, then why worry? If, in your own words, the time we have left is limited, we as individuals should spend that time enjoying our time here rather than worrying about the apocalypse that'll eventually come in a way that's beyond our (the average individual, you and I, not the collective) control.

    As a tangent, I'm a JP->EN translator. Have been for 4 years. For the past 10 years, I've been seeing constant discourse around how "AI will take our jobs!", and yet: I waited 5 years, gave up waiting, learned Japanese myself and started doing it. Other languages (French, Spanish, etc) are more affected by this, significantly more so. Call it presumptuous of me, but I think many of the people in that field are significantly _worse_ than current "AI" at translation, and so are in panic mode. I don't know why I mentioned this. Perhaps I just wanted to highlight how, if I had spent the 4+ years of my current 'career' endlessly worrying about AI and thoughts of jumping ship to another field, I wouldn't have been able to focus on personal development and continue doing what I love, which is game translation work.

    Call it defeatist if you want, but I just can't seem to bring myself to care about it when I can't even begin to bring change to the industry I work in (you've seen how many bad lolcalizers there are). You can't convince me that I could somehow bring change to a wider number of people with their own thoughts and opinions, without spending a huge amount of time on each person only to potentially achieve nothing.

    Nothing wrong with caring, of course, and if you think what I said here doesn't apply to you then rock on my dude.

    Replies
    1. Fact disagreement: "We can't change anything" - I don't think this is true.
      Value disagreement: "If you want something, but cannot obtain it, then you should stop wanting it" - I don't like this.

      >that's beyond our (the average individual, you and I, not the collective) control.
      That's an overcorrection. In a world of 8 billion people it is to be expected that an average individual holds only a one-eight-billionth share of the control. That might seem rather insignificant, especially after being conditioned by the overpowered main characters of modern entertainment media. Still, that is what you have to work with.
      If you only meant "an average individual cannot easily change everyone's future", then that's obvious; what would the reverse situation even look like?

      >While I agree that if nobody talks about it, nothing will happen, many AI researchers are infact talking about it
      Still not enough. Average people have heard nothing, know nothing of this problem and don't factor it into their decisions at all. Even those who do know often don't take it seriously.

      >In a sense, I don't think it's worth worrying about things that are directly outside the purview of what you can affect personally.
      Worrying as "trying to do something about it"? See "fact disagreement". Worrying as "feeling bad about it"? See "value disagreement".

  3. > If any of you are aliens, time-travelers or espers, better start acting now.
    Heh, not sure you'd want that. Nagato and Asakura aren't just aliens, they're alien AIs, and not in the service of anything biological. Seems a reasonable guess they might have done in their creators. Let's just cut to the chase and appeal directly to God: start publishing Japanese light novels targeting teenage girls, with the message that AI needs to be destroyed forever. Can't be more restrained than that, unfortunately, or it would be too boring to grab Her attention.

    I agree that it's not productive to worry too much: circle of control/concern might just be the single best piece of advice to follow if you want a good life in the modern world. That said, writing posts like these is still reasonable; convincing people is worth something.

    And who knows, maybe alignment can be, if not solved, then approximated. The doomers take such an absolutist stance that I think they overlook the possibility of incremental progress. If you don't have a probably perfect system, they'll always handwave "superintelligence will always come up with a way to beat you". I think there are meaningful steps that could be taken. Mundane security controls, and things like some of the "observe how the LLM 'thinks'" work done recently.

    IDK. Weird problem, very hard to reason about. Either this is all millenarian insanity or the most urgent and important problem in history. Although I guess what I'm trying to say is that maybe there's an in-between: yes, it's deadly serious, but not as daunting as the people most scared assume it to be. Who knows.

  4. *provably, not probably.

  5. I left my CS program partly due to the direction AI is heading. I don't feel that passion anymore and don't want to get into that huge mess of implications. I don't know if I could be proud of my work by the time I'm out of school, when some fucker could probably do something comparable with AI.

    > Can't we just let stuff happen at its own pace and maybe everything works out?
    Don't we all wish
