• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: June 5th, 2023


  • >dinner is done
    >announce dinner
    >everyone shows up 10 minutes later

    Some of the food is cold because I didn’t cover it, but why should I? It would have been fine if they’d come to the table when I announced dinner.

    >next day
    >dinner is almost done
    >remembering yesterday, I decide to announce dinner 10 minutes early so that 10 minutes later is “on time”
    >everyone arrives at the table immediately, remembering that it was cold yesterday and not wanting that to happen again
    >mfw



  • The device wouldn’t necessarily have to be constantly streaming audio to a central server. If it’s capable of recognizing wake words like “Ok Google,” it’s capable of listening for other phrases and using onboard processing to relay back the results in a much more compressed form. Whether or not this is common practice is another matter, and yes, the algorithms are scary good even without eavesdropping.
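    Roughly what that could look like (a minimal Python sketch; detect_keywords is a hypothetical stand-in for an on-device model, not any vendor’s actual pipeline): instead of uploading raw audio, the device only ships a tiny event record when a phrase of interest matches.

    ```python
    import json
    import time

    # Hypothetical on-device keyword spotter. In a real device this would be a
    # small neural net scoring audio frames; here it just scans locally decoded
    # text for target phrases.
    TARGET_PHRASES = ["ok google", "buy a car", "book a flight"]

    def detect_keywords(audio_text: str) -> list[str]:
        """Return which target phrases appear in the locally decoded audio."""
        lowered = audio_text.lower()
        return [p for p in TARGET_PHRASES if p in lowered]

    def build_event(matches: list[str]) -> bytes:
        """Compress minutes of listening into a few dozen bytes of metadata."""
        event = {"ts": int(time.time()), "hits": matches}
        return json.dumps(event).encode()  # this, not raw audio, goes upstream

    payload = build_event(detect_keywords("I think I want to buy a car soon"))
    print(payload, f"({len(payload)} bytes vs megabytes of raw audio per minute)")
    ```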

  • That’s fair. I think a false positive/negative isn’t fundamentally that different. Pretty much all tests, especially those dealing with real-world conditions, are heuristic, as are all LLMs by the necessity of their design. “Hallucination” is a pretty specific term given to AI as an attempt to assign agency to a system that doesn’t actually have any: it implies the model is crazy and making stuff up, rather than a black box with deterministic inputs and outputs that spits out something factually wrong but formatted like its training data. I feel like any tool where “you can’t trust this to be entirely accurate” should have an umbrella term that covers both ways of producing inaccurate info under certain conditions.

    I suppose the difference is that AI is a lot more likely to go off at random, whereas a blood test is more likely to produce repeated false positives for the same person because of their unique biology? There’s also the fact that most medical tests boil down to a true/false dichotomy or a lookup table, whereas an LLM is given the entire bounds of language.

    Would an AI clustering algorithm (say, K-means) giving an inaccurate diagnosis be a false positive/negative or a hallucination? These models can be tuned along a sliding scale, and I feel like there’s definitely a region where the line gets pretty blurry.
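    For a concrete version of that blurriness (a minimal sketch on made-up 2-D data, not a real diagnostic model): K-means has to hand a borderline point to exactly one cluster, and the hard label looks the same whether the call was confident or a coin flip.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Two synthetic "patient" groups: healthy near (0, 0), diseased near (4, 4).
    healthy = rng.normal(loc=0.0, scale=1.0, size=(50, 2))
    diseased = rng.normal(loc=4.0, scale=1.0, size=(50, 2))
    X = np.vstack([healthy, diseased])

    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # A borderline case halfway between the groups: K-means still picks one.
    borderline = np.array([[2.0, 2.0]])
    label = model.predict(borderline)[0]
    dists = np.linalg.norm(model.cluster_centers_ - borderline, axis=1)
    print(f"assigned cluster {label}; distances to centers: {dists.round(2)}")
    # Nearly equal distances, but the output is a single hard label either way.
    ```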


  • I mean, AI is used in fraud detection pretty often; when it hits a false positive (which happens frequently at the population level), is that not a hallucination of some sort? Obviously LLMs can go off the rails much further because the output is readable text, but any machine learning model will occasionally spit out really bad guesses that almost any person could have beaten. (To be fair, humans are highly capable of really bad guesses too.)
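    The population-level part is just base-rate arithmetic; with illustrative numbers (made up, not real fraud statistics), even a detector that only misfires on 1% of legitimate transactions buries the true hits in false alarms when fraud itself is rare.

    ```python
    # Illustrative base-rate arithmetic (made-up numbers, not real fraud stats).
    transactions = 1_000_000
    fraud_rate = 0.001          # 0.1% of transactions are actually fraudulent
    sensitivity = 0.95          # fraction of real fraud the model catches
    false_positive_rate = 0.01  # fraction of legit transactions flagged anyway

    fraud = transactions * fraud_rate
    legit = transactions - fraud

    true_positives = fraud * sensitivity           # 950
    false_positives = legit * false_positive_rate  # 9,990

    precision = true_positives / (true_positives + false_positives)
    print(f"flags: {true_positives + false_positives:,.0f}, "
          f"of which only {precision:.1%} are actually fraud")
    ```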

  • Hot take:

    Every time I see a Dr. Seuss parody that doesn’t respect the very strict meter that made his stuff flow so well, I spend about five minutes trying to fix it before stopping, because that was supposed to be the author’s responsibility. You can sneak in an extra syllable here or there, and there will be spots that are ambiguous depending on pronunciation, but if you’re more than two syllables off, you should’ve workshopped it some more.

    Take all of these matters most seriously
    The gravest of grave should be clear
    To step out of meter where any could see
    Will only get side-glancing sneers

    And who, then, shall patch up this unfinished road
    Assembled with half-baked word stones?
    'Tis not my intent but I think it’s best flowed
    With a concrete from Onceler’s old bones