maegul

joined 2 years ago
[–] [email protected] 9 points 2 days ago (1 children)

So ... how likely is a "de-woke-ification" of trek, presuming it survives?

[–] [email protected] 3 points 2 days ago

It seems to be almost entirely about temperature/climate ... warm weather = eat at night when it's cool.

Searching for some cultural factors then becomes the interesting question. Which places offer food in spite of their climate?

Scrolling through the linked page and its maps, South America, India and South East Asia (eg Thailand etc) stand out.

Southern cities in South America such as Buenos Aires seem to have a night life despite being rather south (and therefore presumably cool), compared at least to more northern/warmer cities.

Similarly, Indian cities seem to do late-night food (which I've heard is generally a thing in Indian culture, to eat dinner relatively late) while South East Asia, which I presume is just as warm, doesn't do late food.

[–] [email protected] 1 points 2 days ago

Oh yea I hear you.

[–] [email protected] 4 points 3 days ago

Yea, the "cheaper than droids" line in Andor feels strangely prescient ATM.

[–] [email protected] 22 points 3 days ago (7 children)

Not a stock market person or anything at all ... but NVIDIA's stock has been oscillating since July and has been falling for about two weeks (see Yahoo Finance).

What are the chances that this is the investors getting cold feet about the AI hype? There were open reports from some major banks/investors about a month or so ago raising questions about the business models (right?). I've seen a business/analysis report on AI that, despite trying to trumpet it, actually contained data on growing uncertainty about its capability from those actually trying to implement, deploy and use it.

I'd wager that the situation right now is full of tension, with plenty of conflicting opinions from different groups of people, almost none of whom actually know much about generative AI/LLMs, and all with different and competing stakes and interests.

[–] [email protected] 4 points 3 days ago (1 children)

AFAICT, it helps you pick an instance based on your interests, which only barely helps with the problem. If you’re new to the ecosystem, you typically just want to join in and see what’s going on before making any decisions. And you probably don’t want to bother with selecting criteria for a selection guide at all.

What I’m suggesting is clicking a “Sign Up” button, entering credentials, verifying, and you’re done. Then let the whole finding-an-instance process pan out naturally.

Part of the issue IMO is that how an instance advertises itself isn’t necessarily how it will be seen by someone … they need to see it for themselves.

[–] [email protected] 9 points 3 days ago
  1. Fix picking an instance. It’s an irredeemably bad UX, even for tech people who could run an instance if they wanted to. Gotta remove that as an initial UX barrier first, which would require a new system layer integrated with all of the clients.
  2. Accept that this isn’t like mainstream social media and likely never will be, even if instance picking becomes easier for newcomers. So instead focus on what can be done well here. IMO it’s customisable community building.

Currently all the big fediverse platforms kinda suck at this, in part because it likely requires a bunch of features, but also because they’re all made in imitation of big social platforms that were always less “homely” and more engagement farms.

To bring in normies, something new and unique needs to be offered. IMO there could be a rich ecosystem of content, structures and communities that draws people in.

My fear is that the protocol and federation are the limiting factors on this, and so I suspect some restructuring or redesign is necessary.

[–] [email protected] 3 points 3 days ago (3 children)

Yea, instead of a default instance, I think there should be a default system that assigns you to one of a set of participating “general” instances without you having to decide or think about it.

[–] [email protected] 7 points 3 days ago (2 children)

Just recently read your 2017 article on the different parts of the “Free Network”, where it was new to me just how much the Star Trek federation was used and invoked. So definitely interesting to see that here too!

Aesthetically, the fedigram is clearly the most appealing out of all of these. For me at least.

It seems though that using the pentagram may have been a misstep given how controversial it seems to be (easy to forget if you’re not in those sort of spaces). I liked the less pentagram styled versions at the bottom. I wonder if a different geometry could be used?

[–] [email protected] 2 points 3 days ago

I would think that it’s naturally an opt-in feature and therefore essentially fine with only a practical upside.

[–] [email protected] 1 points 3 days ago

Yea I know, which is why I said it may become a harsh battle. Not being in education, it really seems like a difficult situation. My broader point about the harsh battle was that if it becomes well known that LLMs are bad for a child’s development, then there’ll be a good amount of anxiety from parents etc.

 

While territorial claims are and will likely be heated, what struck me is that the area is right near the Drake Passage, in the Weddell Sea (which is fundamental to the world's ocean currents AFAIU).

I don't know how oil drilling in the antarctic could affect the passage, but still, I'm not sure I would trust human oil hunger with a 10ft pole on that one.

Also interestingly, the discovery was made by Russia, which is a somewhat ominous clue about where the current "multi-polar" world and climate change are heading. Antarctica, being an actual continent that thrived with life up until only about 10-30 M yrs ago, is almost certainly full of resources.

 

It's funny, at time of posting, many of the YT comments are very nostalgic about how much has happened in this 8 year period ... and I can't lie, I feel it too god damn it.

 

Seems like fertile ground for coming up with something fun and interesting ... a whole shadow universe that barely touches ours ... but I don't think I've ever seen it.

 

Rant …

Spoiler: I’m talking about Ash/Rook, obviously.

Just saw the film recently, and while it’s a bit of a love-it-or-hate-it film, I think the Rook character is objectively egregious.

The idea is good, IMO, in a number of ways, and I can understand that the film makers felt like it was all done with love and affection for Holm and the character. As a viewer, not necessarily onboard with how many cues the film was taking from the franchise, I noticed the silhouette of Rook pretty quickly and was quite happy/hyped to see where it would go.

But OMG the execution is unforgivable! And I feel like this is just so much of what’s wrong with Hollywood and VFX, and also indicates that some execs were definitely intervening in this film. Somewhat fortunately for the film, it had a low budget (AFAICT, by Wikipedia) and is making a profit.

But it’s no excuse to slap some bad CGI onto shots that were not designed for bad CGI. Close-ups on the uncanny valley! Come on! AFAICT, bad CGI is often the result of a complete disconnect between the director and the VFX crew, in part because the VFX industry is kept at arm’s length from the film industry, despite (or because of) its massive importance.

That CGI is not something you do a close up on. No remotely decent director would have done that knowing the CGI looked like that. This is likely bad studio management creating an unworkable situation.

What could have worked much better IMO is to not have the synth functioning well. Have its facial expressions and movements completely artificial and mechanical. Rely on the likeness of Holm and the AI voice (which did and generally does work well). Could have been done with a well-directed animatronic coupled with some basic CGI to enrich textures and details. Instead we got a dumb “we’ll do it in post” and tortured some poor editor into cutting those shots together.

For many the film was a mixed bag. For me too. But this somehow prevents me from embracing it because I just don’t trust the people who made it.

… End rant.

 

A nice and fair comparison, I thought. The main difference, it seems, was the styles of the two films, where a bunch of stylistic choices, quite separate from whether CGI was used or not, distinguish the two.

My take after seeing Furiosa was that its biggest flaw was that its makers struggled with the expectations set by Fury Road, and I think these stylistic differences kinda support that: I'd guess they felt they had to go with a different look rather than simply repeat Fury Road's aesthetic, when in the end there may not have been much of a coherent artistic purpose behind those changes.

 

New genre just dropped!

I've liked some of the other things this guy has done, but didn't get into this track at first. As I kept watching though, I got more and more into it and am certain I'd be down for an album of this stuff.

 

Yes, I'm slow, sorry!

Now this may very well be excessive expectations. I had heard a few people say it's this year's Andor, IE that you should just watch it even if it's not the sort of thing you think you'd be into. Also, I've never played the games.

I've just finished the first 2 episodes, and, for me, it's not bad, it's a kinda interesting world ... but there's a distinctly empty feeling and awkwardness to the show for me. Sometimes scenes feel like they're either filling time or still trying to find their rhythm. I'm not sure any of the dialogue has caught my ear (at all). I'm not sure I've picked up on any interesting stakes or mysteries. And I've often wondered about the directing (where I can't help but wonder if Jonathan Nolan's directing is more about trying to compete with his brother).

The soft tipping point for me was the Knight's fight with the Ghoul (episode 2) ... it just felt pointless and childish. The whole scene seemed to strangely lack any gravity or impetus. And I find myself ~2.5 hrs in and not caring about anything that's happening. It's a post nuclear apocalypse world, with some mutants, a naive bunker person, and a manipulative corporation or two doing sneaky shit ...

... dunno ... what am I missing? Should I just keep watching?

 

Watching this, and seeing more of these types of interviews from Corridor Crew, it struck me that it's filling the void left by the death of DVDs/BluRays and their special features.

 

Intro

Having read through the macros section of "The Book" (Chapter 19.6), I thought I would try to hack together a simple idea using macros as a way to get a proper feel for them.

The chapter was a little light, and declarative macros (using macro_rules!), which is what I'll be using below, seemed like a potentially very nice feature of the language ... the sort of thing that really makes the language malleable. Indeed, in poking around I've realised, perhaps naively, that macros are a pretty common tool for rust devs (or at least more common than I knew).

I'll rant for a bit first, which those new to rust macros may find interesting or informative (it's kinda a little tutorial) ... to see the implementation, skip to the "Implementation (without using a macro)" heading and what follows.

Using a macro

Well, "declarative macros" (with macro_rules!) were pretty useful I found and easy to get going with (such that it makes perfect sense that they're used more frequently than I thought).

  • It's basically pattern matching on arbitrary code and then emitting new code through a templating-like mechanism (pretty intuitive).
  • The type system and rust-analyzer LSP understand what you're emitting perfectly well in my experience. It really felt properly native to rust.
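As a taste before getting into the details, here's a minimal example of my own (not from the chapter), the classic "first macro", showing the match-and-emit shape end to end:

```rust
// The smallest useful declarative macro: one pattern, one expansion.
macro_rules! square {
    ( $x:expr ) => {
        $x * $x
    };
}

fn main() {
    // An expr fragment is substituted as a single expression, so
    // square!(1 + 2) expands to (1 + 2) * (1 + 2) = 9, not 1 + 2 * 1 + 2
    // (unlike C preprocessor macros, no manual parenthesising is needed).
    println!("{}", square!(3));     // 9
    println!("{}", square!(1 + 2)); // 9
}
```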

The Elements of writing patterns with "Declarative macros"

Use macro_rules! to declare a new macro

Yep, it's also a macro!

Create a structure just like a match expression

  • Except the pattern will match on the code provided to the new macro
  • ... And uses special syntax for matching on generic parts or fragments of the code
  • ... And it returns new code (not an expression or value).

Write a pattern as just rust code with "generic code fragment" elements

  • You write the code you're going to match on, but for the parts that you want to capture as they will vary from call to call, you specify variables (or more technically, "metavariables").
    • You can think of these as the "arguments" of the macro, as they're the parts that are operated on while the rest is literally just static text/code.
  • These variables will have a name and a type.
  • The name is prefixed with a dollar sign $ like so: $GENERIC_CODE.
  • And its type follows a colon, as in ordinary rust: $GENERIC_CODE:expr
    • These types are actually "fragment specifiers": they specify what part of rust syntax will appear in the fragment.
    • Presumably, they link right back into the rust parser and are part of how these macros integrate pretty seamlessly with the type system and borrow checker or compiler.
    • Here's a decent list from rust-by-example (you can get a full list in the rust reference on macro "metavariables"):
      • block
      • expr is used for expressions
      • ident is used for variable/function names
      • item
      • literal is used for literal constants
      • pat (pattern)
      • path
      • stmt (statement)
      • tt (token tree)
      • ty (type)
      • vis (visibility qualifier)

So a basic pattern that matches on any struct while capturing the struct's name, its only field's name, and its type would be:

macro_rules! my_new_macro {
    (
        struct $name:ident {
            $field:ident: $field_type:ty
        }
    ) => { /* emitted code goes here (covered below) */ };
}

Now, $name, $field and $field_type will be captured for any single-field struct (and, presumably, the validity of the syntax enforced by the "fragment specifiers").

Capture any repeated patterns with + or *

  • Yea, just like regex
  • Wrap the repeated pattern in $( ... )
  • Place whatever separating code that will occur between the repeats after the wrapping parentheses:
    • EG, a separating comma: $( ... ),
  • Place the repetition counter/operator after the separator: $( ... ),+
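Before returning to the struct example, a quick toy macro of my own showing the repetition syntax in its simplest form: summing any number of comma-separated expressions.

```rust
// A toy macro using the $( ... ),* repetition syntax: matches zero or
// more comma-separated expressions and emits them joined with `+`.
macro_rules! sum_all {
    ( $( $x:expr ),* ) => {
        // In the emitted code, the same $( ... )* wrapper repeats the
        // inner fragment once per captured $x (here with no separator,
        // since the `+` sign lives inside the repetition).
        0 $( + $x )*
    };
}

fn main() {
    let total = sum_all!(1, 2, 3, 4); // expands to 0 + 1 + 2 + 3 + 4
    println!("{}", total);            // 10
    assert_eq!(sum_all!(), 0);        // zero repeats also match with *
}
```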

Example

So, to capture multiple fields in a struct (expanding from the example above):

macro_rules! my_new_macro {
    (
        struct $name:ident {
            $field:ident: $field_type:ty,
            $( $ff:ident : $ff_type:ty ),*
        }
    ) => { /* emitted code goes here (covered below) */ };
}
  • This will capture the first field and then any additional fields.
    • The way you use these repeats mirrors the way they're captured: they all get used in the same way and rust will simply repeat the new code for each repeated capture.

Writing the emitted or new code

Use => as with match expressions

  • Actually, it's => { ... }, IE with braces (not sure why)

Write the new emitted code

  • All the new code is simply written between the braces
  • Captured "variables" or "metavariables" can be used just as they were captured: $GENERIC_CODE.
  • Except types aren't needed here
  • Captured repeats are expressed within wrapped parentheses just as they were captured: $( ... ),*, including the separator (which can be different from the one used in the capture).
    • The code inside the parentheses can differ from that captured (that's the point after all), but at least one of the variables from the captured fragment has to appear in the emitted fragment so that rust knows which set of repeats to use.
    • A useful feature here is that the repeats can be used multiple times, in different ways in different parts of the emitted code (the example at the end will demonstrate this).

Example

For example, we could convert the struct to an enum where each field became a variant with an enclosed value of the same type as the struct:

macro_rules! my_new_macro {
    (
        struct $name:ident {
            $field:ident: $field_type:ty,
            $( $ff:ident : $ff_type: ty),*
        }
    ) => {
        enum $name {
            $field($field_type),
            $( $ff($ff_type) ),*
        }
    }
}

With the above macro defined ... this code ...

my_new_macro! {
    struct Test {
        a: i32,
        b: String,
        c: Vec<String>
    }
}

... will emit this code ...

enum Test {
    a(i32),
    b(String),
    c(Vec<String>)
}
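Assembled into one file (my own completion of the example; the #[allow] attribute is an addition of mine to silence the lowercase-variant warning), the whole thing compiles and the generated enum behaves like a hand-written one:

```rust
// Full version of the example macro: matches a struct with one or more
// fields and emits an enum whose variants wrap each field's type.
macro_rules! my_new_macro {
    (
        struct $name:ident {
            $field:ident: $field_type:ty,
            $( $ff:ident : $ff_type:ty ),*
        }
    ) => {
        #[allow(non_camel_case_types)] // field names become variant names
        enum $name {
            $field($field_type),
            $( $ff($ff_type) ),*
        }
    };
}

my_new_macro! {
    struct Test {
        a: i32,
        b: String,
        c: Vec<String>
    }
}

fn main() {
    // The generated `Test` enum is a perfectly ordinary enum.
    let x = Test::a(42);
    if let Test::a(n) = x {
        println!("got {}", n);
    }
}
```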

Application: "The code" before making it more efficient with a macro

Basically ... a simple system for custom types to represent physical units.

The Concept (and a rant)

A basic pattern I've sometimes implemented on my own (without bothering with dependencies that is) is creating some basic representation of physical units in the type system. Things like meters or centimetres and degrees or radians etc.

If your code relies on such and performs conversions at any point, it is way too easy to fuck up, and therefore worth, IMO, creating some safety around. NASA provides an obvious warning. As does, IMO, common sense and experience: most scientists and physical engineers learn the importance of "dimensional analysis" of their calculations.

In fact, it's the sort of thing that should arguably be built into any language that takes types seriously (like eg rust). I feel like there could be an argument that it'd be as reasonable as the numeric abstractions we've worked into programming??

At the bottom I'll link whatever crates I found for doing a better job of this in rust (one of which seemed particularly interesting).

Implementation (without using a macro)

The essential design is (again, this is basic):

  • A single type for a particular dimension (eg time or length)
  • Method(s) for converting between units of that dimension
  • Ideally, flags or constants of some sort for the units (thinking of enum variants here)
    • These could be methods too
#[derive(Debug)]
#[allow(non_camel_case_types)] // unit names are deliberately lowercase
pub enum TimeUnits { s, ms, us }

#[derive(Debug)]
pub struct Time {
    pub value: f64,
    pub unit: TimeUnits,
}

impl Time {
    pub fn new<T: Into<f64>>(value: T, unit: TimeUnits) -> Self {
        Self {value: value.into(), unit}
    }

    fn unit_conv_val(unit: &TimeUnits) -> f64 {
        match unit {
            TimeUnits::s => 1.0,
            TimeUnits::ms => 0.001,
            TimeUnits::us => 0.000001,
        }
    }

    fn conversion_factor(&self, unit_b: &TimeUnits) -> f64 {
        Self::unit_conv_val(&self.unit) / Self::unit_conv_val(unit_b)
    }

    pub fn convert(&self, unit: TimeUnits) -> Self {
        Self {
            value: (self.value * self.conversion_factor(&unit)),
            unit
        }
    }
}

So, we've got:

  • An enum TimeUnits representing the various units of time we'll be using
  • A struct Time that will be any given value of "time" expressed in any given unit
  • With methods for converting from any unit to any other unit, the heart of which is a match expression on the new unit that hardcodes the conversions (relative to the base unit of seconds ... see the conversion_factor() method, which generalises the conversion values).

Note: I'm using T: Into<f64> for the new() method and f64 for Time.value as that is the easiest way I know to accept either integers or floats as values. It works because i32 (and most other numerics) can be converted lossless-ly to f64.
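The T: Into&lt;f64&gt; trick can be seen in isolation (a toy function of mine, not part of the post's code):

```rust
// Minimal illustration of the T: Into<f64> trick used in new(): any
// numeric type with a lossless standard-library conversion to f64 works.
fn to_f64<T: Into<f64>>(value: T) -> f64 {
    value.into()
}

fn main() {
    println!("{}", to_f64(5));   // i32 -> f64
    println!("{}", to_f64(2.5)); // f64 -> f64 (identity)
    println!("{}", to_f64(7u8)); // u8 -> f64
    // Note: to_f64(5u64) would NOT compile, because u64 -> f64 can be
    // lossy and the standard library provides no Into<f64> for it.
}
```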

Obviously you can go further than this. But the essential point is that each unit needs to be a new type, with all the desired functionality implemented manually or through some handy use of blanket trait implementations.

Defining a macro instead

For something pretty basic, the above is an annoying amount of boilerplate!! May as well rely on a dependency!?

Well, we can write the boilerplate once in a macro and then only provide the informative parts!

In the case of the above, the only parts that matter are:

  • The name of the type/struct
  • The name of the units enum type we'll use (as they'll flag units throughout the codebase)
  • The names of the units we'll use and their value relative to the base unit.

IE, for the above, we only need to write something like:

struct Time {
    value: f64,
    unit: TimeUnits,
    s: 1.0,
    ms: 0.001,
    us: 0.000001
}

Note: this isn't valid rust! But that doesn't matter, so long as we can write a pattern that matches it and emit valid rust from the macro, it's all good! (Which means we can write our own little DSLs with native macros!!)

To capture this, all we need are what we've already done above: capture the first two fields and their types, then capture the remaining "field names" and their values in a repeating pattern.

Implementation of the macro

The pattern

macro_rules! unit_gen {
    (
        struct $name:ident {
            $v:ident: f64,
            $u:ident: $u_enum:ident,
            $( $un:ident : $value:expr ),+
        }
    ) => { /* emitted code shown in the full macro below */ };
}
  • Note the repeating fragment doesn't provide a type for the field, but instead captures an expression (expr) after it, despite this being invalid rust.

The Full Macro

macro_rules! unit_gen {
    (
        struct $name:ident {
            $v:ident: f64,
            $u:ident: $u_enum:ident,
            $( $un:ident : $value:expr ),+
        }
    ) => {
        #[derive(Debug)]
        pub struct $name {
            pub $v: f64,
            pub $u: $u_enum,
        }
        impl $name {
            fn unit_conv_val(unit: &$u_enum) -> f64 {
                match unit {
                $(
                    $u_enum::$un => $value
                ),+
                }
            }
            fn conversion_factor(&self, unit_b: &$u_enum) -> f64 {
                Self::unit_conv_val(&self.$u) / Self::unit_conv_val(unit_b)
            }
            pub fn convert(&self, unit: $u_enum) -> Self {
                Self {
                    $v: (self.$v * self.conversion_factor(&unit)),
                    $u: unit
                }
            }
        }
        #[derive(Debug)]
        #[allow(non_camel_case_types)] // unit variants are lowercase by design
        pub enum $u_enum {
            $( $un ),+
        }
    }
}

Note the repeating capture is used twice here in different ways.

  • The capture is: $( $un:ident : $value:expr ),+

And in the emitted code:

  • It is used in the unit_conv_val method as: $( $u_enum::$un => $value ),+
    • Here the ident $un is being used as the variant of the enum that is defined later in the emitted code
    • Where $u_enum is also used without issue, as the name/type of the enum, despite not being part of the repeated capture: variables captured outside a repetition can still be used inside it.
  • It is then used in the definition of the variants of the enum: $( $un ),+
    • Here, only one of the captured variables is used, which is perfectly fine.

Usage

Now all of the boilerplate above is unnecessary, and we can just write:

unit_gen!{
    struct Time {
        value: f64,
        unit: TimeUnits,
        s: 1.0,
        ms: 0.001,
        us: 0.000001
    }
}

Usage from main.rs:

use units::Time;
use units::TimeUnits::{s, ms, us};

fn main() {

    let x = Time{value: 1.0, unit: s};
    let y = x.convert(us);

    println!("{:?}", x);
    println!("{:?}", x);
}

Output:

Time { value: 1.0, unit: s }
Time { value: 1000000.0, unit: us }
  • Note how the struct and enum created by the emitted code are properly available from the module as though they were written manually or directly.
  • In fact, my LSP (rust-analyzer) was able to autocomplete these immediately once the macro was written and called.

Crates for unit systems

I did a brief search for actual unit systems and found the following:

dimensioned

dimensioned documentation

  • Easily the most interesting to me (from my quick glance), as it seems to have created the most native and complete representation of physical units in the type system
  • It creates, through types, a 7-dimensional space, one for each SI base unit
  • This allows all possible units to be represented as a reduction to a point in this space.
    • EG, if the dimensions are [seconds, meters, kgs, amperes, kelvins, moles, candelas], then the Newton, m.kg / s^2 would be [-2, 1, 1, 0, 0, 0, 0].
  • This allows all units to be mapped directly to this consistent representation (interesting!!), and all operations to then be done easily and systematically.
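A rough runtime sketch of that idea (mine, not dimensioned's actual implementation, which encodes the exponents at the type level so errors surface at compile time):

```rust
// Sketch of the exponent-vector idea: each unit is a point in a
// 7-dimensional space [s, m, kg, A, K, mol, cd], and multiplying two
// quantities adds their exponent vectors.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Dim([i8; 7]);

impl Dim {
    fn mul(self, other: Dim) -> Dim {
        let mut out = [0i8; 7];
        for i in 0..7 {
            out[i] = self.0[i] + other.0[i];
        }
        Dim(out)
    }
}

const METER: Dim = Dim([0, 1, 0, 0, 0, 0, 0]);
const KG: Dim = Dim([0, 0, 1, 0, 0, 0, 0]);
const PER_SEC_SQ: Dim = Dim([-2, 0, 0, 0, 0, 0, 0]);

fn main() {
    // Newton = kg . m / s^2  ->  [-2, 1, 1, 0, 0, 0, 0]
    let newton = KG.mul(METER).mul(PER_SEC_SQ);
    assert_eq!(newton, Dim([-2, 1, 1, 0, 0, 0, 0]));
    println!("{:?}", newton);
}
```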

Unfortunately, I'm not sure if the repository is still maintained.

uom

uom documentation

  • This might actually be good too, I just haven't looked into it much
  • It also seems to be currently maintained

F#

Interestingly, F# actually has a system built in!

 

I looked around and struggled to find out what it does?

My guess would be that it notifies you of when new posts are made to communities you subscribe to. But that sounds like a lot, so I'm really not sure.

Otherwise, is it me or does the wording here not speak for itself?

 

Report showing the shift in AI sentiment in the industry. Relatively in depth and probably coming from a pro-AI bias (I haven’t read the whole thing).

Last graph at the bottom was what I was linked to. Clearly shows a corner turning where those closer to the actual “product” are now sceptical while management (the last category in the chart) are more committed.
