• V HOP
    16 months ago

    Bubble in the sense that “many companies will fail”, we can agree on. Companies like OpenAI will survive, lawsuits or not, and even if they were to fail because of the lawsuits, the algorithms are known, and e.g. Microsoft, which has a license to the tech, would just hire the team, start over, and let the corporate entity go bankrupt.

    But all of the “ChatGPT for field X” companies that are just razor-thin layers on top of OpenAI’s API, sure, they will almost all fail, and the only ones that won’t will be those that leverage their initial investment into an opportunity to quickly pivot into something more substantial.

    A lot of people talk about AI as a bubble in the sense of believing the tech will go away, though, and that will never happen, because it’s useful enough.

    Regarding OpenAI’s market cap, I don’t agree - I think it’ll increase far more, unless they massively misstep. Even though they’re riding high on hype, they also still have a big lead, not down to hype but down to actually being significantly ahead of even competitors like Google. And given the high P/E ratios in tech, they don’t need to be the backend for all that many big deployments at big companies, even just fielding really stupid-simple uses that don’t really need the capabilities of GPT, before they’ll justify that valuation.

    • @TootSweet@lemmy.world
      26 months ago

      it’s useful enough.

      To whom, in what endeavor, though?

      You can’t really trust anything an LLM says because of hallucinations. What’s the use case for an algorithm that gives you convincingly-worded but very likely false answers to your questions? Or writes professional-sounding documents filled with lies?

      And if you’ve got people fact checking your LLM’s output, is the LLM really benefitting anybody?

      We haven’t found an algorithm yet that a) is general purpose, b) produces trustworthy output, and c) doesn’t require specialized skills or babysitting to operate. And the current algorithms can’t really be retrofitted to make them fit these criteria.

      ChatGPT is a cool parlor trick. But the first actually useful “AI” chat bot won’t run on the same algorithms or principles as ChatGPT.

      • V HOP
        16 months ago

        You can’t really trust anything a human says either - we’re frequently wrong yet convinced we’re right, or not nearly as competent as we think. Yet we manage, because in a whole lot of endeavours, being right often enough and being able to verify answers is sufficient.

        There are plenty of situations where LLMs are “right enough” and/or where checking the output is trivial. E.g. software development, where I can easily tell whether the output is “right enough”, where humans are often wrong too, and where we rely on tests to verify correctness anyway.
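        For illustration, a tiny sketch of that workflow - the slugify function here is just a made-up example of the kind of small helper an LLM might draft, not something from this thread; the point is that an ordinary unit test does the verifying no matter who (or what) wrote the code:

        ```python
        import re

        # Imagine an LLM drafted this helper; slugify is a made-up example.
        def slugify(title: str) -> str:
            """Lowercase the title, drop punctuation, join words with hyphens."""
            words = re.findall(r"[a-z0-9]+", title.lower())
            return "-".join(words)

        # The test, not the model, is what establishes "right enough".
        def test_slugify():
            assert slugify("Hello, World!") == "hello-world"
            assert slugify("  Already--clean  ") == "already-clean"
            assert slugify("") == ""

        if __name__ == "__main__":
            test_slugify()
            print("ok")
        ```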

        Having to cross-check results is a nuisance, but when I can e.g. run things past it on subjects I know well enough to tell whether the answers are bullshit, and where it can often produce answers better than a lot of actual software developers, it’s worth it. E.g. I recently had it give me a refresher on the algorithm for converting a non-deterministic finite automaton (NFA) to a deterministic finite automaton (DFA), and it explained it perfectly (which is not a surprise; there is plenty of material on that subject). But unlike if I’d just looked it up on Google, I could also construct examples to test that I remembered it right and have it produce the expected output (which, yes, I verified was correct).
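        For reference, a rough sketch of that conversion (the textbook subset construction, ignoring epsilon transitions for brevity); the example NFA for strings ending in “01” is my own illustration, not what the model produced:

        ```python
        from collections import deque

        def nfa_to_dfa(nfa_delta, start, accepting, alphabet):
            """Subset construction: each DFA state is a set of NFA states.

            nfa_delta maps (state, symbol) -> set of successor NFA states.
            """
            start_set = frozenset([start])
            dfa_delta, dfa_accepting = {}, set()
            seen, queue = {start_set}, deque([start_set])
            while queue:
                current = queue.popleft()
                if current & accepting:
                    dfa_accepting.add(current)
                for sym in alphabet:
                    nxt = frozenset(s for q in current
                                    for s in nfa_delta.get((q, sym), set()))
                    dfa_delta[(current, sym)] = nxt
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return dfa_delta, start_set, dfa_accepting

        def dfa_accepts(dfa_delta, start, accepting, word):
            state = start
            for sym in word:
                state = dfa_delta[(state, sym)]
            return state in accepting

        # Example NFA over {0,1} accepting strings that end in "01":
        # q0 loops on everything and nondeterministically guesses the final "01".
        nfa = {("q0", "0"): {"q0", "q1"}, ("q0", "1"): {"q0"}, ("q1", "1"): {"q2"}}
        delta, start, acc = nfa_to_dfa(nfa, "q0", {"q2"}, ["0", "1"])
        assert dfa_accepts(delta, start, acc, "1101")
        assert not dfa_accepts(delta, start, acc, "0110")
        ```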

        I also regularly have it write full functions. I have a web application where it has written ca. 80% of the code without intervention from me, and plenty of my libraries now have functions it has written.

        I use it regularly. It’s saving me more than enough time to justify both the subscription to ChatGPT and API fees for other use.

        As such, it is “actually useful” for me, and for many others.