Fern

joined 1 year ago
[–] [email protected] 8 points 3 days ago (1 children)

Quite unnerving

[–] [email protected] 55 points 4 days ago (6 children)

That and the crisis of masculinity in a machismo culture which is being actively used against them by idiots everywhere.

 

Anyone have any advice for a nooby pirate trying to find some textbooks at the decent price of free-fiddy?

[–] [email protected] 5 points 6 days ago (1 children)

Too true, also what we call civility politics. I wouldn't be surprised if corporate backers prefer it that way.

[–] [email protected] 23 points 1 week ago (1 children)

Just yesterday I watched Tim Pool say that traitors should get the death penalty...

[–] [email protected] 8 points 2 weeks ago (1 children)

More would be great. What sort of arguments did you make? Were you discussing the science?

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Forgive me for being suspicious of your comment. There is a huge anti-vegan bias in society, and many who argue against veganism aren't doing so in good faith. Can you provide any examples of the mods doing this?

[–] [email protected] 1 points 2 weeks ago (1 children)

Have you looked at any science on this issue? Or are you just using cOmMoN SeNsE to decide who should have their brain removed?

[–] [email protected] 14 points 2 weeks ago

But it looks like he posted the picture and someone named Zeek wrote the sexual stuff. Can't even see what he wrote.

[–] [email protected] 2 points 2 weeks ago (1 children)

Doh! Didn't mean to link to a specific time in the video.

 

Even though I'm on here, I honestly lurk most of the time and don't fully understand the ActivityPub vs AT Protocol war. This was a great explainer that will reach a lot of people. Really appreciate a lot of David's takes. I hope David and others at MKBHD become aware of and talk about Lemmy soon too.

[–] [email protected] 21 points 2 weeks ago (4 children)

Is this edited? Gowron is so clear looking.

[–] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago)

Definitely. The thing you might want to consider as well is what you are using it for. Is it professional? Not reliable enough. Is it to try to understand things a bit better? Well, it's hard to say if it's reliable enough, but it's heavily biased just as any source might be, so you have to take that into account.

I don't have the experience to tell you how to suss out its biases. Sometimes, you can push it in one direction or another with your wording. Or with follow-up questions. Hallucinations are a thing but not the only concern. Cherrypicking, lack of expertise, the bias of the company behind the llm, what data the llm was trained on, etc.

I have a hard time pinning down a good way to double-check your llm. I think this is a skill we are currently learning, just as we have been learning how to suss out the bias in a headline or an article based on its author, publication, platform, etc. But for llms, it feels fuzzier right now. For certain issues, it may be less reliable than for others as well. Anyways, that's my ramble on the issue. Wish I had a better answer; if only I could ask someone smarter than me.


Oh, here's GPT-4o's take.

When considering the accuracy and biases of large language models (LLMs) like GPT, there are several key factors to keep in mind:

1. Training Data and Biases

  • Source of Data: LLMs are trained on vast amounts of data from the internet, books, articles, and other text sources. The quality and nature of this data can greatly influence the model's output. Biases present in the training data can lead to biased outputs. For example, if the data contains biased or prejudiced views, the model may unintentionally reflect these biases in its responses.
  • Historical and Cultural Biases: Since data often reflects historical contexts and cultural norms, models might reproduce or amplify existing stereotypes and biases related to gender, race, religion, or other social categories.

2. Accuracy and Hallucinations

  • Factual Inaccuracies: LLMs do not have an understanding of facts; they generate text based on patterns observed during training. They may provide incorrect or misleading information if the topic is not well represented in their training data or if the data is outdated.
  • Hallucinations: LLMs can "hallucinate" details, meaning they can generate plausible-sounding information that is entirely fabricated. This can occur when the model attempts to fill in gaps in its knowledge or when asked about niche or obscure topics.

3. Context and Ambiguity

  • Understanding Context: While LLMs can generate contextually appropriate responses, they might struggle with nuanced understanding, especially in cases where subtle differences in wording or context significantly change the meaning. Ambiguity in a prompt or query can lead to varied interpretations and outputs.
  • Context Window Limitations: LLMs have a fixed context window, meaning they can only "remember" a certain amount of preceding text. This limitation can affect their ability to maintain context over long conversations or complex topics.

4. Updates and Recency

  • Outdated Information: Because LLMs are trained on static datasets, they may not have up-to-date information about recent events, scientific discoveries, or new societal changes unless explicitly fine-tuned or updated.

5. Mitigating Biases and Ensuring Accuracy

  • Awareness and Critical Evaluation: Users should be aware of potential biases and inaccuracies and approach the output critically, especially when discussing sensitive or fact-based topics.
  • Diverse and Balanced Data: Developers can mitigate biases by training models on more diverse and balanced datasets and employing techniques such as debiasing algorithms or fine-tuning with carefully curated data.
  • Human Oversight and Expertise: Where high accuracy is critical (e.g., in legal, medical, or scientific contexts), human oversight is necessary to verify the information provided by LLMs.

6. Ethical Considerations

  • Responsible Use: Users should consider the ethical implications of using LLMs, especially in contexts where biased or inaccurate information could cause harm or reinforce stereotypes.

In summary, while LLMs can provide valuable assistance in generating text and answering queries, their accuracy is not guaranteed, and their outputs may reflect biases present in their training data. Users should use them as tools to aid in tasks, but not as infallible sources of truth. It is essential to apply critical thinking and, when necessary, consult additional reliable sources to verify information.

[–] [email protected] 0 points 2 weeks ago (1 children)
 

Lemmyversers, I'm looking for some help developing a new mnemonic device.

Inspired by a video by Epic Spaceman, where he explains a handy system for comparing the size of things from a banana to an atom, I’ve come up with a mnemonic device to aid in remembering these scales.

He lists items, each smaller than the previous by a factor of 10:

  • Banana
  • Coin
  • Edge of the coin
  • Waterbear/microorganism
  • Red blood cell
  • Bacteria
  • "Good virus"/Bacteriophage
  • Coronavirus/"Bad virus"
  • DNA
  • Atom

So a coin is roughly 1/10 a banana, and the edge of that coin is roughly 1/10 the size of that coin.

It gives good references for thinking about other things of similar size. A sort of banana for scale at each factor of 10.

And it allows you to quickly determine approximations: Covid is roughly 1,000 times smaller than a red blood cell, and an atom is roughly a billion times smaller than a banana. (Ten items means nine factor-of-10 steps from banana to atom, so 10^9, roughly a billion, actually checks out.)
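The ladder and those ratio claims can be turned into quick arithmetic. A minimal Python sketch, assuming a ~20 cm banana and treating every step as an exact factor of 10 (real sizes only roughly follow this, so treat the numbers as order-of-magnitude):

```python
# Factor-of-10 scale ladder from the Epic Spaceman video.
# Sizes are illustrative order-of-magnitude assumptions, not measured values.
items = ["banana", "coin", "coin edge", "waterbear", "red blood cell",
         "bacteria", "bacteriophage", "coronavirus", "DNA", "atom"]

BANANA_M = 0.2  # assumed ~20 cm banana
sizes = {name: BANANA_M / 10 ** i for i, name in enumerate(items)}

def times_smaller(a, b):
    # How many times smaller item a is than item b on this idealized ladder.
    return sizes[b] / sizes[a]

print(round(times_smaller("coronavirus", "red blood cell")))  # ~1,000
print(round(times_smaller("atom", "banana")))                 # ~1 billion
```

Since the ladder is idealized, the ratios fall straight out of the index difference: 10 raised to however many steps apart two items are.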

Do you think that's a useful memory tool? And are these the best touchstones for scale at each level?

The mnemonic I've come up with for it, as you may have guessed, is:

  • Be
  • Cool
  • Even
  • When
  • Really
  • Big
  • Goblins
  • Casually
  • Drop
  • Acid
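One quick way to sanity-check any tweak to the mnemonic is to verify that each word's initial still matches its item's initial. A small Python sketch (item names abbreviated from the list above):

```python
# Check that every mnemonic word starts with the same letter as its item.
items = ["Banana", "Coin", "Edge of the coin", "Waterbear", "Red blood cell",
         "Bacteria", "Good virus", "Coronavirus", "DNA", "Atom"]
mnemonic = "Be Cool Even When Really Big Goblins Casually Drop Acid".split()

mismatches = [(word, item) for word, item in zip(mnemonic, items)
              if word[0].upper() != item[0].upper()]
print(mismatches)  # an empty list means the mnemonic lines up
```

The same check works on any replacement sentence someone suggests, as long as the word count stays at ten.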

Do you have any better ideas or tweaks you'd recommend for the mnemonic or the touchstones?

Would this be helpful when trying to wrap your head around the scale of the micro?

Also, what would make for a good macro version of this, where everything gets bigger by a factor of 10?

3
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Got some ideas/requests.

This is my favorite app so far. Love the swipe controls and the compact UI. Liking the snappiness of it all.

Ideas/requests

  • Add direct messaging (I can see my messages but can't respond to them).
  • Add trending communities when you click search/a way to explore communities rather than having to type in the specific community you want.
  • Make it so clicking on "all" at the top of the screen opens a menu that gives you the option to switch between all, local, subscribed and specific subs and functions as a search.
  • Allow user to customize UI a bit more, specifically in compact mode I'd like to be able to switch the side that the preview thumbnail is on.
  • Show profile banners and have a way for users to customize their banner and profile pic.
  • Add an edit post function.
  • Add sidebars.
  • Ability to save drafts on posts and comments.

Keep up the amazing work! ʕ⁠·⁠ᴥ⁠·⁠ʔ--
