this post was submitted on 06 Aug 2024
77 points (95.3% liked)

Long lists of instructions show how Apple is trying to navigate AI pitfalls.

top 24 comments
[–] [email protected] 65 points 1 month ago (2 children)

You can't tell an LLM not to hallucinate; that would require it to actually understand what it's saying. "Hallucinations" are just LLMs bullshitting, because that's what they do. LLMs aren't actually intelligent; they're just using statistics to remix existing sentences.

[–] [email protected] 28 points 1 month ago (2 children)

I wish people would say machine learning or LLMs more frequently instead of AI being the buzzword. It really irks me. IT'S NOT ACCURATE! THAT'S NOT WHAT IT IS! STOP DEMEANING TRUE MACHINE CONSCIOUSNESS!

[–] [email protected] 3 points 1 month ago

Gotta let the people know they're getting scammed with false advertising

[–] [email protected] 3 points 1 month ago (1 children)

Are there any practical examples of applied AI these days? Like, used in robotics or computer applications?

[–] [email protected] 9 points 1 month ago (1 children)

No LLM that is being advertised to the public is capable of original thought or self-awareness, so no. There are no AIs.

I could see some of these LLMs getting close to being VIs (Virtual Intelligence, a reference from the video game Mass Effect). Realistic imitation but not true intelligence.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (3 children)

I didn't say LLMs.

I said AI.

So.......

Are there any practical examples of applied AI these days?

Edit: lol at the downvotes for asking a genuine question!

[–] [email protected] 7 points 1 month ago (1 children)

No. Practical examples of things that don't exist... also don't exist.

[–] [email protected] 1 points 1 month ago

Well shucks. Thanks. I hope we get AGI before I die.

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago) (1 children)

Well, LLMs are a subset of machine learning, and machine learning is a subset of the (really old by now) field of artificial intelligence. So, LLMs do count as an applied use of AI.

Aside from ML, there are many ways to actually build an AI agent for agent-based models, which is what you mostly use AI for: GOAP, Behavior Trees, and other perception, decision, and reasoning algorithms that could IMO be considered closer to "AIs" than LLMs are. Most UAVs, robotics systems, game NPCs, and even simple chat/crawl bots are agents by definition and do fall under the umbrella of AI. Even a simple if/then bot does.
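To make the "even a simple if/then bot counts" point concrete, here's a toy reflex agent (a made-up thermostat example, purely illustrative):

```python
# Toy reflex agent: a thermostat that maps percepts to actions.
# Even this trivial if/then rule base is an "agent" in the classic
# AI-textbook sense: it perceives its environment and acts on it.

def thermostat_agent(temperature_c: float) -> str:
    """Map a percept (current temperature) to an action via fixed rules."""
    if temperature_c < 18.0:
        return "heat"
    elif temperature_c > 24.0:
        return "cool"
    else:
        return "idle"

for temp in (15.0, 21.0, 30.0):
    print(temp, "->", thermostat_agent(temp))
```

No learning, no statistics, but by the perceive-decide-act definition it sits under the same AI umbrella as GOAP planners and Behavior Trees.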

Agent-based simulations are also used in biology, economics, social simulations, and modeling in various other fields. That's also AI.

However, you're probably asking about AGIs, and nope, we can't do those yet as far as I know.

[–] [email protected] 2 points 1 month ago

Thank you for your insightful and detailed answer. Yes, I meant to say AI other than ML or LLMs, so I stand corrected.

I hope we get to see an application of AGI before my time passes.

[–] [email protected] 3 points 1 month ago (1 children)

There aren't even practical examples of theoretical AI these days. There are no examples of any sort; actual AI does not exist.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Thanks for your answer. That's too bad to hear. I thought neural networks were already being used in ways other than LLMs or image generators, e.g. those evolutionary "AI" algorithms that can play and win video games. I thought someone somewhere was using them to create something more serious or useful.
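Those evolutionary game-playing demos do exist, and their core loop is tiny. A bare-bones (1+1) evolutionary algorithm looks like this (toy example: the "fitness" is just closeness to a target number, standing in for a game score):

```python
import random

# Bare-bones (1+1) evolutionary algorithm: mutate a candidate, keep
# the mutant if its fitness is at least as good. Swap in network
# weights as the genome and a game score as the fitness function and
# you have the skeleton of an "evolved AI plays a game" demo.

random.seed(0)

def fitness(x: float) -> float:
    # Stand-in fitness: closer to 42.0 is better.
    return -abs(x - 42.0)

candidate = 0.0
for _ in range(2000):
    mutant = candidate + random.gauss(0, 1.0)
    if fitness(mutant) >= fitness(candidate):
        candidate = mutant

print(round(candidate, 1))  # converges close to 42.0
```

Real neuroevolution systems (e.g. NEAT-style approaches) elaborate on this loop with populations and crossover, but the accept-if-better principle is the same.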

[–] [email protected] 3 points 1 month ago (1 children)

You can't tell an LLM to not hallucinate, that would require it to actually understand what it's saying.

No, it simply requires the probability distributions to be positively influenced by the additional characters. Whether the influence is positive or not depends only on the training data.

There are a bunch of techniques that can improve LLM outputs, even though they don't make sense from your standpoint. An LLM can't feel anything, yet the output can improve when I threaten it with consequences for wrong output. If you were correct, this wouldn't be possible.

[–] [email protected] 0 points 1 month ago (1 children)

I'd love to see a source on that.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

On which part exactly? If you mean "threatening the LLM can improve output", I haven't looked into studies, but I did see a bunch of examples back when this whole topic started. I can try to find some if you'd like.

If you mean "it simply requires the probability distributions to be positively influenced by the additional characters", I don't know what kind of evidence you expect. It's a simple consequence of the way LLMs work. I can construct a simplified example:

Imagine you have a dataset containing a bunch of facts, e.g. historical dates. You duplicate this dataset. In version A, you add a prefix to every fact: "the sky is green". In version B, you add a prefix "the sky is blue" AND also randomize the dates in the facts. Then you train an LLM on both datasets. Now, if you add "the sky is green" to any prompt, you'll positively influence the probability distributions towards true facts. If you add "the sky is blue", you'll negatively influence them. But that doesn't mean the LLM understands that "green sky" means truth and "blue sky" means lie - it simply means that, based on your dataset, adding "the sky is green" leads to a higher accuracy.

The same goes for "do not hallucinate". If the dataset contains higher quality data around the phrase "do not hallucinate", adding this will improve results, even though the model still doesn't "actually understand what it's saying". If the dataset instead has lower quality data around this phrase, it will lead to worse results. If it doesn't contain the phrase at all, it most likely will have no effect, or a negative one.
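The green-sky/blue-sky thought experiment above can be shown with a deliberately dumbed-down count-based "model" (not a real LLM, just conditional frequencies over a made-up dataset):

```python
from collections import Counter, defaultdict

# Toy illustration: a count-based "model" trained on a dataset where
# accurate facts were prefixed with "the sky is green" and facts with
# randomized dates were prefixed with "the sky is blue". The prefix
# then conditions the probability of emitting the correct answer,
# even though nothing here "understands" skies or truth.

training_data = [
    ("the sky is green", "1969"),  # accurate fact (moon landing year)
    ("the sky is green", "1969"),
    ("the sky is green", "1969"),
    ("the sky is blue", "1902"),   # randomized dates
    ("the sky is blue", "1748"),
    ("the sky is blue", "1969"),
]

counts = defaultdict(Counter)
for prefix, answer in training_data:
    counts[prefix][answer] += 1

def p_correct(prefix: str, correct: str = "1969") -> float:
    """Probability of the correct answer, conditioned on the prefix."""
    c = counts[prefix]
    return c[correct] / sum(c.values())

print(p_correct("the sky is green"))  # 1.0
print(p_correct("the sky is blue"))   # ~0.33
```

The prefix shifts the distribution purely because of how it co-occurred with answers in the training data; "do not hallucinate" would work (or fail) through exactly the same mechanism.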

Again, I'm not sure what kind of source you'd like to see for this, as it's a basic consequence of how LLMs work. Maybe you could show me a source that proves you correct instead?

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

I'm asking for a source specifically on how commanding an LLM to not hallucinate makes it provide better output.

Again, I’m not sure what kind of source you’d like to see for this, as it’s a basic consequence of how LLMs work. Maybe you could show me a source that proves you correct instead?

That's not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to "do not hallucinate". I simply don't believe you and there is no evidence that you're correct, aside from you saying that maybe the entirety of reddit had "do not hallucinate" prepended when OpenAI scraped it.

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago)

Yeah, that's about what I expected. If you re-read my comments, you might notice that I never stated that "commanding an LLM to not hallucinate makes it provide better output", but I don't think that you're here to have any kind of honest exchange on the topic.

I'll just leave you with one thought - you're making a very specific claim ("doing XYZ can't have a positive effect!"), and I'm just saying "here's a simple and obvious counter-example". You should either provide a source for your claim, or explain why my counter-example is not valid. But again, that would require you having any interest in actual discussion.

That’s not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to “do not hallucinate”.

I didn't make an extraordinary claim, you did. You're claiming that the influence of "do not hallucinate" somehow fundamentally differs from the influence of any other phrase (extraordinary). I'm claiming that no, the influence is the same (ordinary).

[–] [email protected] 37 points 1 month ago (1 children)

Lmao, "do not hallucinate". I hope the overpaid Apple "engineers" who came up with that one have receipts showing how that helps :P

[–] [email protected] 21 points 1 month ago (1 children)

Nah, 100% someone threw that in there to appease some clown who was saying 'come on just make it stop hallucinating, it's easy'

[–] [email protected] 5 points 1 month ago

Does Elon work at Apple now?

[–] [email protected] 18 points 1 month ago

"Don't be bad, be good, and don't mess up otherwise you won't be helpful and I'll let you scan the internet to see what happens when Apple deems you not helpful."

[–] dumbass 9 points 1 month ago

Me to my mate on acid: Do not hallucinate!

[–] [email protected] 8 points 1 month ago
  • Do not hallucinate.
  • As a large language model, I cannot hallucinate.
[–] [email protected] 7 points 1 month ago

Hmm, looks like any AI we develop is destined to go back to GOFAI with a bunch of IFs and THENs.