this post was submitted on 11 Jun 2024
20 points (81.2% liked)

AI

top 10 comments
[email protected] 10 points 3 months ago

Never really occurred to me before how huge a 10x reduction in parameters would be on consumer hardware.

Like, obviously 10x is a lot, but with the way things are going, it wouldn't surprise me to see that kind of leap in the next year or two tbh.

[email protected] 6 points 3 months ago

Finally. Wrong answers to questions using my phone.

[email protected] 2 points 3 months ago

That would actually be insane. Right now, I still need my GPU and about 8-10 gigs of VRAM to run a 7B model tho, so idk how that's supposed to work on a phone. Still, being able to run a model that's as good as a 70B model but with the speed and memory usage of a 7B model would be huge.

[email protected] 4 points 3 months ago

I only need ~4 GB of RAM/VRAM for a 7B model; my GPU only has 6 GB of VRAM anyway. 7B models are smaller than you think, or you have a very inefficient setup.
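
For reference, a 4-bit (Q4_K_M) 7B GGUF file is only about 4 GB on disk, which is where my number comes from. A minimal llama-cpp-python sketch is all it takes (the file name and settings here are placeholders, not anything specific):

```
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder; any 4-bit 7B GGUF is roughly 4 GB.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,       # context window; bigger contexts need more memory
    n_gpu_layers=-1,  # offload every layer to the GPU if it fits
)

out = llm("Q: Name one planet. A:", max_tokens=16)
print(out["choices"][0]["text"])
```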

[email protected] 4 points 3 months ago

That's weird; maybe I actually am doing something wrong. Could it be because I'm using GGUF models?

[email protected] 1 point 3 months ago

Llama2 GGUF with 2-bit quantisation only needs ~5 GB of VRAM; 8-bit needs >9 GB. Anything in between is possible. There are even 1.5-bit and 1-bit options (not GGUF, AFAIK). Generally, fewer bits means worse results though.
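
Back-of-the-envelope, the weights alone scale linearly with bits (a rough sketch; GGUF quants like Q2_K aren't uniformly 2 bits, and the KV cache plus runtime overhead come on top, which is why measured VRAM is higher than this):

```
# Rough weight-memory estimate: parameters * bits per weight / 8 bytes.
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (1, 1.5, 2, 4, 6, 8):
    print(f"7B at {bits}-bit: ~{approx_weight_gb(7, bits):.1f} GB of weights")
# 2-bit -> ~1.8 GB, 8-bit -> ~7.0 GB; overhead and the KV cache account
# for the gap up to the ~5 GB / >9 GB figures above.
```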

[email protected] 1 point 3 months ago

Yeah, I usually take the 6-bit quants; I didn't know the difference was that big. That's probably why, though. Unfortunately, almost all Llama3 models are either 8B or 70B, so there isn't really anything in between. I find Llama3 models noticeably better than Llama2 models though, otherwise I would have tried bigger models with lower quants.

[email protected] 1 point 3 months ago

I have never worked in machine learning; what does the B stand for? Billion? Bytes?

[email protected] 2 points 3 months ago

I think it's how many billion parameters the model has.
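
As a rough illustration of what "counting parameters" means (the two layer sizes here are made up, just to echo typical LLM shapes):

```
# Toy example: count trainable parameters and report them in billions.
import torch.nn as nn

model = nn.Sequential(        # made-up two-layer stack for illustration
    nn.Linear(4096, 11008),
    nn.Linear(11008, 4096),
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.3f}B parameters")  # ~0.090B for this toy model
```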

[email protected] 1 point 3 months ago