jshdbdb a day ago

> we are losing our monopoly over human level intelligence

Do people really believe that if there is someone out there smarter or more resourceful, society has monopoly control over what they do? There was never any monopoly on human intelligence.

Information asymmetry and knowledge gaps are not new problems.

Nature is full of them. And there is no constant, overnight collapse happening because of it.

  • valiant55 17 hours ago

    It might be people awakening and finally realizing how ignorant they are.

    • Xiol32 14 hours ago

      Interesting that you chose "they" instead of "we".

mirawelner 15 hours ago

AI that is smarter than the average human has existed for a while. The average human isn't that bright. If we had an AI smarter than a smart human, that would be impressive. But honestly, average human hallucinations are worse than LLM hallucinations at this point.

(Sorry, I work in an ML+Bio lab in the US and I'm depressed and angry.)

pseudolus a day ago
  • globnomulous 12 hours ago

Thanks. This op-ed strikes me as just the usual breathless hype that nobody should take seriously.

    > I arrived at [these views] as a journalist who has spent a lot of time talking to the engineers building powerful A.I. systems, the investors funding it and the researchers studying its effects. And I’ve come to believe that what’s happening in A.I. right now is bigger than most people understand.

    > In San Francisco, where I’m based

In other words, the man (a) is not an engineer or specialist, (b) gets his information secondhand (c) from the people most heavily invested, literally and figuratively, in these developments, and (d) even lives in the same city as many of these people. I don't normally like the term "echo chamber," because it's fine, for example, not to "teach both sides," but if these are his qualifications and his sources, he shouldn't be taken as seriously as even the marketing copy that these companies circulate.

    > I believe that the right time to start preparing for A.G.I. is now.

    Nobody, including the author, has satisfactorily explained to me what the hell this means. Even if the breathless hype is true, what are they saying I should do?

    Should I just, like, think real hard about it, because Something Big This Way Comes? If so, please show me the evidence that people are to be trusted when they make prognostications about the future of technology, even in the near term. If people were good at it, more of us would have retired after making millions on Nvidia a year or two ago.

    If instead I'm to believe there are practical consequences and measures that I should take, then the people saying this are, I think, making an incoherent, self-defeating argument:

Is their argument that AGI, when it's available, will bear a close resemblance to the current tools, that we'll interact with them similarly, and that they'll be similarly integrated into the things that currently integrate A(non-G)I?

    If so, then why do they think AGI will Change Everything, rather than just becoming another fucking customer-service chatbot?

    In turn, if they think AGI will Change Everything(TM) and will be qualitatively different from anything that came before (Ooo, "paradigm shift"), why would I waste my time on the current tools, especially considering their (in my experience) mediocrity and worthlessness for work of any significance? If my best hope for survival in the new technological order is to "start preparing," what is the prescription? It seems, conveniently, to be to buy in and pay the companies making tools unlike, and inferior to, the ones I'm supposed to expect in the near future -- or, at a minimum, just to use the stuff they make. Funny how that works, isn't it?

    No, thank you. I share Jaron Lanier's skepticism and profound boredom with these tools and his deep mistrust of the people selling and promoting them. They are autocomplete token predictors. They're statistical models on steroids. And they're becoming just another way for owners of capital to squeeze value for themselves from the labor of others without paying for it -- theft via matrices.

I say this in every thread about AI, so I'll say it here as well: to say that AGI is just around the corner because LLMs produce polished-looking output is no different from mistaking special effects for the real thing, or from looking at a picture of a dog on a computer screen and believing there is a real, actual dog inside the computer -- or that dogs inside computers are just around the corner.

    > Today, software engineers tell me that A.I. does most of the actual coding for them, and that they increasingly feel that their job is to supervise the A.I. systems.

Utter nonsense. And to the extent that it's true, it is at least as true that these tools are effectively Stack Overflow copy-paste cud, albeit with less work on the part of the developer, which means users learn less, understand less, and are worse at their jobs.

jdbbd a day ago

> I arrived at them as a journalist who has spent a lot of time talking to the engineers

Pfft. Engineers are too one-dimensional and fixated on the micro to have anything worthwhile to say about the macro. Most importantly, they don't study people, and are therefore completely clueless about how people, especially the ones in power, will react to what they build.