      Matter 1.4 has some solid ideas for the future home—now let’s see the support

      news.movim.eu / ArsTechnica • 8 November, 2024 • 1 minute

    Matter, the smart home standard that promises an interoperable future for home automation, even if it's scattered and a bit buggy right now, is out with a new version, 1.4. It promises more device types, improvements for working across ecosystems, and tools for managing battery backups, solar panels, and heat pumps.

    "Enhanced Multi-Admin" is the headline feature for anybody invested in Matter's original promise, one where you can buy a device and it doesn't matter if your other gear is meant for Amazon (Alexa), Google, Apple, or whatever, it should just connect and work. With 1.4, a home administrator should be able to let a device onto their network just once, and then have that device picked up by whatever controller they're using. There have technically been ways for a device to be set up on, say, Alexa and Apple Home, but the process has been buggy, involves generating "secondary codes," and is kind of an unpaid junior sysadmin job.

    What's now available is "Fabric Sync," which sounds like something that happens in a static-ridden dryer. But "Fabrics" is how the Connectivity Standards Alliance (CSA) describes smart home systems, like Alexa or Google Home. In theory, with every tech company doing their best, you'd set up a smart light bulb with your iPhone, add it to your Apple Home, but still have it be able to be added to a Google Home system, Android phones included. Even better, ecosystems that don't offer controls for entire categories, like Apple and smart displays (because it doesn't make any), should still be able to pick up and control them.


      The voice of America Online’s “You’ve got mail” has died at age 74

      news.movim.eu / ArsTechnica • 8 November, 2024

    On Tuesday, Elwood Edwards, the voice behind America Online's iconic "You've got mail" greeting, died at age 74, one day before his 75th birthday, according to Cleveland's WKYC Studios, where he worked for many years. The greeting became a cultural touchstone of the early Internet era in the 1990s and early 2000s; it was heard by hundreds of millions of users when they logged in to the service and new email was waiting for them.

    The story of Edwards' famous recording began in 1989 when Steve Case, CEO of Quantum Computer Services (which later became America Online —or AOL for short), wanted to add a human voice to the company's Quantum Link online service. Karen Edwards, who worked as a customer service representative, heard Case discussing the plan and suggested her husband Elwood, a professional broadcaster.

    Edwards recorded the famous phrase (and several others) into a cassette recorder in his living room in 1989 and was paid $200 for the service. His voice recordings of "Welcome," "You've got mail," "File's done," and "Goodbye" went on to reach millions of users during AOL's rise to dominance in the 1990s online landscape.


      Apple botched the Apple Intelligence launch, but its long-term strategy is sound

      news.movim.eu / ArsTechnica • 8 November, 2024

    Ask a few random people about Apple Intelligence and you’ll probably get quite different responses.

    One might be excited about the new features. Another could opine that no one asked for this and the company is throwing away its reputation with creatives and artists to chase a fad. Another still might tell you that regardless of the potential value, Apple is simply too late to the game to make a mark.

    The release of Apple’s first Apple Intelligence-branded AI tools in iOS 18.1 last week makes all those perspectives understandable.


      TSMC will stop making 7 nm chips for Chinese customers

      news.movim.eu / ArsTechnica • 8 November, 2024

    Taiwan Semiconductor Manufacturing Company has notified Chinese chip design companies that it will suspend production of their most advanced artificial intelligence chips, as Washington continues to impede Beijing’s AI ambitions.

    TSMC, the world’s largest contract chipmaker, told Chinese customers it would no longer manufacture AI chips at advanced process nodes of 7 nanometers or smaller as of this coming Monday, three people familiar with the matter said.

    Two of the people said any future supplies of such semiconductors by TSMC to Chinese customers would be subject to an approval process likely to involve Washington.


      Notepad.exe, now an actively maintained app, has gotten its inevitable AI update

      news.movim.eu / ArsTechnica • 8 November, 2024

    Among the decades-old Windows apps to get renewed attention from Microsoft during the Windows 11 era is Notepad, the basic built-in text editor that was much the same in early 2021 as it had been in the '90s and 2000s. Since then, it has gotten a raft of updates, including a visual redesign, spellcheck and autocorrect, and window tabs.

    Given Microsoft's continuing obsession with all things AI, it's perhaps not surprising that the app's latest update (currently in preview for Canary and Dev Windows Insiders) adds a generative AI feature called Rewrite, which promises to adjust the length, tone, and phrasing of highlighted sentences or paragraphs. Users will be offered three rewritten options based on what they've highlighted, and they can select the one they like best or tell the app to try again.

    Rewrite appears to be based on the same technology as the Copilot assistant, since it uses cloud-side processing (rather than your local CPU, GPU, or NPU) and requires Microsoft account sign-in to work. The initial preview is available to users in the US, France, the UK, Canada, Italy, and Germany.


      Review: M4 and M4 Pro Mac minis are probably Apple’s best Mac minis ever

      news.movim.eu / ArsTechnica • 7 November, 2024

    The Mac mini will celebrate its 20th birthday in January. And I think the M4 version of the Mac mini is far and away the most appealing one Apple has ever made.

    When it was introduced during the white plastic heyday of peak iPod-era Apple, the Mac mini was pitched as the cheapest way to buy into the Mac ecosystem. It was $499. And despite some fluctuation (as high as $799 for the entry-level 2018 mini, $599 for this year's refresh), the Mac mini has stayed the cheapest entry-level Mac ever since.

    But the entry-level models always left a lot to be desired. The first Mac mini launched with just 256MB of RAM, a pretty anemic amount even by the standards of the day. The first Intel Mac mini in 2006 came with a single-core Core Solo processor, literally the last single-core Mac Apple ever released and the only single-core Intel Mac. The 2018 Mac mini's Core i3 processor left a lot to be desired for the price. The 8GB of RAM included in the basic M1 and M2 Mac minis was fine for many things but left very little headroom for future growth.


      Thoughts on the M4 iMac, and making peace with the death of the 27-inch model

      news.movim.eu / ArsTechnica • 7 November, 2024 • 1 minute

    The M4 iMac is a nice computer.

    Apple's addition of 16GB RAM to the basic $1,299 model makes it a whole lot more appealing for the vast majority of people who just want to take the computer out of the box and plunk it on a desk and be done. New USB-C accessories eliminate some of the last few Lightning ports still skulking around in Apple's lineup. The color options continue to be eye-catching in a way that evokes the original multicolored plastic ones without departing too far from the modern aluminum-and-glass Apple aesthetic. The $200 nano-texture display option, included in the review loaner that Apple sent us, is lovely, though I lightly resent having to pay more for a matte screen.

    [Photo gallery: the back of the iMac, where the color is most visible; the new USB-C accessories (yes, the charging port is still on the bottom); the mildly improved 12MP webcam, with a field of view wide enough to support Desk View mode in macOS; and, on models with an Ethernet port, the port's placement on the power brick rather than the back of the machine. Credit: Andrew Cunningham]

    This is all I really have to say about this iMac, because it's externally nearly identical to the M1 and M3 versions of the same machine that Apple has been selling for three years now. The M4 isn't record-setting fast, but it is quick enough for the kinds of browsing and emailing and office work that most people will want to use it for. The fully enabled 10-core version is usually about as fast as a recent Intel Core i5/Core Ultra 5 or AMD Ryzen 5 desktop CPU while using just a fraction of the power, and its respectable integrated GPU is faster than anything Intel or AMD is shipping in that department.


      Computing International Call Rates with a Trie

      Stephen Paul Weber • 13 April, 2022 • 4 minutes

    A few months ago we launched International calling with JMP.  One of the big tasks leading up to this launch was computing the rate card: that is, how much calls to different destinations would cost per minute.  While there are many countries in the world, there are even more calling destinations.  Our main carrier partner for this feature lists no fewer than 59881 unique phone number prefixes in the rates they charge us.  This list is, quite frankly, incomprehensible.  One can use it to compute the cost of a call to a particular number, but it gives no confidence about the cost of calls in general.  Many items on this list are similar, and so I set out to create a better list.

    My first attempt was a simple one-pass algorithm.  It recorded each prefix with its price, and if a longer prefix with a different price was discovered, it added that as well.  This removed the most obvious effectively-duplicate data but still left a very large list.  I added our markup and various rounding rules (increments of whole cents are easier to understand in most cases anyway), which cut things down a bit further, but it became clear that one pass was not going to be sufficient.
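
    Roughly, that first attempt looked something like the sketch below (my reconstruction from the description above, not the original code; the method name and the Hash bookkeeping are assumptions):

    def one_pass(prefix_rates)
        kept = {}
        # Visit shorter prefixes before the longer ones that extend them
        prefix_rates.sort_by { |prefix, _| prefix.length }.each do |prefix, rate|
            # Find the longest already-kept prefix that this one extends
            covering = kept.keys.select { |p| prefix.start_with?(p) }.max_by(&:length)
            # Record this prefix only if it changes the price
            kept[prefix] = rate if covering.nil? || kept[covering] != rate
        end
        kept
    end

    To see why a single pass like this falls short, consider: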

    1. +00 at $0.01
    2. +0010 at $0.02
    3. +0011 at $0.02
    4. +0012 at $0.02
    5. +0013 at $0.02
    6. +0014 at $0.02
    7. +0015 at $0.02
    8. +0016 at $0.02
    9. +0017 at $0.02
    10. +0018 at $0.02
    11. +0019 at $0.02

    There are many sets of prefixes that look like this in the data.  Of course the right answer here is that +001 is $0.02, which is much easier to understand than this list, but the algorithm cannot know that until it has seen all 10 overlapping prefixes.  Even worse:

    1. +00 at $0.01
    2. +0010 at $0.02
    3. +0011 at $0.02
    4. +0012 at $0.02
    5. +0013 at $0.02
    6. +0014 at $0.02
    7. +0015 at $0.03
    8. +0016 at $0.02
    9. +0017 at $0.02
    10. +0018 at $0.02
    11. +0019 at $0.02

    From this input we would like:

    1. +00 at $0.01
    2. +001 at $0.02
    3. +0015 at $0.03

    So just checking if the prefixes we have so far are a fully-overlapped set is not enough.  Well, no problem, it’s not that much data, perhaps I can implement a brute-force approach and be done with it.

    Brute force is very slow.  It did complete on this data, but as I kept wanting to tweak the rounding rules and other parts of the overlap detection, the speed became really problematic.  So I went searching for a non-brute-force approach that would be optimal across all prefixes and fast enough to re-run often, in order to play with the effects of rounding rules.

    Trie

    As I was discussing the problem with a co-worker, trying to speed up lookups, we started thinking about trees.  Maybe a tree where traversal to the next level is determined by the next digit of the prefix?  As we explored what this would look like, it became obvious that we were reinventing the Trie.  So I grabbed a gem and started monkeypatching things.

    Most Trie implementations are about answering yes/no questions and don’t store anything but the prefix in the tree.  I wanted to be able to “look down” from any node in the tree to see if the data was overlapping, and so storing rates right in the nodes seemed useful:

    def add_with(chars, rate)
        if chars.empty? # leaf node for this prefix
            @rate = rate
            terminal!
        else
            # Otherwise descend, passing the remaining digits down the tree
            add_to_children_tree_with(chars, rate)
        end
    end

    But sometimes we have a level that doesn’t have a rate, so we need to compute its rate from the majority-same rate of its children:

    def rate
        # This level has a known rate already
        return @rate if @rate
    
        groups =
            children_tree.each_value.to_a         # Immediate children
            .select { |x| x.rate }                # That have a rate
            .combination(2)                       # Pairwise combinations
            .select { |(x, y)| x.rate == y.rate } # That are the same
            .group_by { |x| x.first.rate }        # Group by rate
        unless groups.empty?
            # Whichever rate has the most entries in the children is our rate
            @rate = groups.max_by { |(_, v)| v.length }.first
            return @rate
        end
    
        # No rate here or below
        nil
    end

    This algorithm is naturally recursive on the tree, so even if the immediate children don’t have a rate they will compute from their children, etc.  And finally a traversal to turn this all back into the flat list we want to store:

    def each
        if rate
            # Find the rate of our parent in the tree,
            # possibly computed in part by asking us
            up = parent
            while up
                break if up.rate
                up = up.parent
            end
    
            # Add our prefix and rate to the list unless parent has it covered
            yield [to_s, rate] unless up&.rate == rate
        end
    
        # Add rates from children also
        children_tree.each_value do |child|
            child.each { |x| yield x }
        end
    end

    This (with rounding rules, etc) cut the list from our original of 59881 down to 4818.  You can browse the result.  It’s not as short as I was hoping for, but many destinations are manageable now, and thanks to a little bit of Computer Science we can tweak it in the future and just rerun this quick script.
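
    For concreteness, the pieces above fit together roughly like this (my sketch, not code from the post; the Node class name and the CSV input shape are assumptions):

    require "csv"

    root = Node.new

    # Build the trie: one level per digit, rates stored at terminal nodes
    CSV.foreach("carrier_rates.csv") do |(prefix, rate)|
        root.add_with(prefix.chars, rate.to_f)
    end

    # each yields a prefix only when its rate differs from the nearest
    # priced ancestor, producing the compact rate card
    root.each do |prefix, rate|
        puts "#{prefix}\t#{rate}"
    end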

      Rust Factory Without Box (Trait Object)

      Slixfeed • 1 September, 2018 • 6 minutes

    I’ve been playing around a lot with Rust recently, and it’s quickly becoming my second-favourite programming language. One of the things I’ve been exploring is how some Object Oriented design concepts might apply to Rust. For example, consider this code:

    fn format_year(n: i32) -> String {
    	if n == 0 {
    		"0 is not a year".to_string()
    	} else if n < 0 {
    		format!("{} BC", -n)
    	} else {
    		format!("{} AD", n)
    	}
    }

    While maybe overkill for this small example, let’s go ahead and replace conditional with polymorphism:

    fn format_year(n: Box<Year>) -> String {
    	format!("{} {}", n.year(), n.era())
    }
    
    trait Year {
    	fn year(&self) -> u32;
    	fn era(&self) -> String;
    }
    
    impl Year {
    	fn new(n: i32) -> Box<Year> {
    		if n == 0 {
    			Box::new(YearZero())
    		} else if n < 0 {
    			Box::new(YearBC(-n as u32))
    		} else {
    			Box::new(YearAD(n as u32))
    		}
    	}
    }
    
    struct YearZero();
    
    impl Year for YearZero {
    	fn year(&self) -> u32 { 0 }
    	fn era(&self) -> String { "is not a year".to_string() }
    }
    
    struct YearBC(u32);
    
    impl Year for YearBC {
    	fn year(&self) -> u32 { self.0 }
    	fn era(&self) -> String { "BC".to_string() }
    }
    
    struct YearAD(u32);
    
    impl Year for YearAD {
    	fn year(&self) -> u32 { self.0 }
    	fn era(&self) -> String { "AD".to_string() }
    }

    This works, and really does seem to mimic the way this kind of design looks in a class-based Object Oriented language. It has a major disadvantage, however: all our objects are on the heap now, which is likely to cause performance issues. In some cases this can be fixed by using continuation-passing style (CPS), so that the trait objects can be borrowed references instead of boxes, but that’s both ugly and not always an option.
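
    For illustration, the CPS variant might look something like this (a minimal sketch of my own, not from the original post): the factory builds the concrete value on the stack and passes a borrow of it to a continuation, so nothing is ever boxed.

    fn with_year<R>(n: i32, f: impl FnOnce(&Year) -> R) -> R {
    	if n == 0 {
    		f(&YearZero())         // &YearZero coerces to &Year
    	} else if n < 0 {
    		f(&YearBC(-n as u32))
    	} else {
    		f(&YearAD(n as u32))
    	}
    }

    // Usage: with_year(-44, |y| format!("{} {}", y.year(), y.era()))
    // Every caller now has to be written inside-out, which is the
    // "ugly" part.

    One other design might be to use an enum: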

    fn format_year(n: Year) -> String {
    	format!("{} {}", n.year(), n.era())
    }
    
    enum Year {
    	YearZero,
    	YearBC(u32),
    	YearAD(u32)
    }
    
    impl Year {
    	fn new(n: i32) -> Year {
    		if n == 0 {
    			Year::YearZero
    		} else if n < 0 {
    			Year::YearBC(-n as u32)
    		} else {
    			Year::YearAD(n as u32)
    		}
    	}
    
    	fn year(&self) -> u32 {
    		match self {
    			Year::YearZero => 0,
    			Year::YearBC(y) => *y,
    			Year::YearAD(y) => *y
    		}
    	}

    	fn era(&self) -> String {
    		match self {
    			Year::YearZero => "is not a year".to_string(),
    			Year::YearBC(_) => "BC".to_string(),
    			Year::YearAD(_) => "AD".to_string()
    		}
    	}
    }

    No more heap allocations! While this is obviously analogous, some might claim we haven’t actually “replaced conditional” at all, though we have at least contained the conditionals in a place where a type only knows about itself, and not about other things that might get passed in. Even if you accept adding match arms on self as “extension”, in terms of open/closed this requires a modification to at least the enum and the factory to add a new case, instead of just the factory as with the trait version.

    What is it about the enum version that allows us to avoid the boxing? Well, an enum knows what all the possibilities are, and so the compiler can know the size that needs to be reserved to store any one of those. With the trait case, the compiler can’t know how big the infinite world of possibilities that might implement that trait could be, and so cannot know the size to be reserved: we have to defer that to runtime and use a box. However, the factory will always actually return only a known list of trait implementations… can we exploit that to know the size somehow? What if we create an enum of the structs from the trait version and have the factory return that?

    enum YearEnum {
    	YearZero(YearZero),
    	YearBC(YearBC),
    	YearAD(YearAD)
    }
    
    impl Year {
    	fn new(n: i32) -> YearEnum {
    		if n == 0 {
    			YearEnum::YearZero(YearZero())
    		} else if n < 0 {
    			YearEnum::YearBC(YearBC(-n as u32))
    		} else {
    			YearEnum::YearAD(YearAD(n as u32))
    		}
    	}
    }
    
    impl std::ops::Deref for YearEnum {
    	type Target = Year;
    
    	fn deref(&self) -> &Self::Target {
    		match self {
    			YearEnum::YearZero(x) => x,
    			YearEnum::YearBC(x) => x,
    			YearEnum::YearAD(x) => x
    		}
    	}
    }

    The impl std::ops::Deref will allow us to call any method in the Year trait on the enum as returned from the factory, allowing this to effectively act as a trait object, but with no heap allocations! This seems like exactly what we want, but it’s a lot of boilerplate. Luckily, it’s very mechanical so creating a macro to do this for us is fairly easy (and I’ll throw in a bunch of other obvious trait implementations while we’re at it):

    macro_rules! trait_enum {
    	($trait:ident, $enum:ident, $( $item:ident ) , *) => {
    		enum $enum {
    			$(
    				$item($item),
    			)*
    		}
    
    		impl std::ops::Deref for $enum {
    			type Target = $trait;
    
    			fn deref(&self) -> &Self::Target {
    				match self {
    					$(
    						$enum::$item(x) => x,
    					)*
    				}
    			}
    		}
    
    		impl From<$enum> for Box<$trait> {
    			fn from(input: $enum) -> Self {
    				match input {
    					$(
    						$enum::$item(x) => Box::new(x),
    					)*
    				}
    			}
    		}
    
    		impl<'a> From<&'a $enum> for &'a $trait {
    			fn from(input: &'a $enum) -> Self {
    				&**input
    			}
    		}
    
    		impl<'a> AsRef<$trait + 'a> for $enum {
    			fn as_ref(&self) -> &($trait + 'a) {
    				&**self
    			}
    		}
    
    		impl<'a> std::borrow::Borrow<$trait + 'a> for $enum {
    			fn borrow(&self) -> &($trait + 'a) {
    				&**self
    			}
    		}
    
    		$(
    			impl From<$item> for $enum {
    				fn from(input: $item) -> Self {
    					$enum::$item(input)
    				}
    			}
    		)*
    	}
    }

    And now to repeat the first refactoring, but with the help of this new macro:

    fn format_year<Y: Year + ?Sized>(n: &Y) -> String {
    	format!("{} {}", n.year(), n.era())
    }
    
    trait Year {
    	fn year(&self) -> u32;
    	fn era(&self) -> String;
    }
    
    trait_enum!(Year, YearEnum, YearZero, YearBC, YearAD);
    
    impl Year {
    	fn new(n: i32) -> YearEnum {
    		if n == 0 {
    			YearZero().into()
    		} else if n < 0 {
    			YearBC(-n as u32).into()
    		} else {
    			YearAD(n as u32).into()
    		}
    	}
    }
    
    struct YearZero();
    
    impl Year for YearZero {
    	fn year(&self) -> u32 { 0 }
    	fn era(&self) -> String { "is not a year".to_string() }
    }
    
    struct YearBC(u32);
    
    impl Year for YearBC {
    	fn year(&self) -> u32 { self.0 }
    	fn era(&self) -> String { "BC".to_string() }
    }
    
    struct YearAD(u32);
    
    impl Year for YearAD {
    	fn year(&self) -> u32 { self.0 }
    	fn era(&self) -> String { "AD".to_string() }
    }

    We do still have two places which must be modified rather than extended (the macro invocation and the factory), but all other code can be written in ignorance of those, in the same style as code using a normal trait object. Normal trait objects can even be recovered using the various implementations the macro creates, or just by applying &* to the enum. Benchmarking these three styles on a somewhat more complex example actually found this last one to be the most performant as well (though only marginally faster than the pure-enum approach), with the boxed-trait-object style more than three times slower.
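
    As a closing usage sketch (my own example, not from the original post), the enum returned by the factory can be handed to format_year by dereferencing it into a trait object:

    fn main() {
    	for &n in [-44, 0, 1969].iter() {
    		let year = Year::new(n);              // a stack-allocated YearEnum
    		println!("{}", format_year(&*year));  // &* yields a &Year trait object
    	}
    }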

    So there you go, next time you ask yourself if you want the flexibility of a trait or the size guarantees and performance of an enum, maybe grab a macro and say: why not both!

    Creative Commons Attribution 4.0 International License © 2006-2024 Stephen Paul Weber. Some Rights Reserved.