Search Results for "october 13"

DTNS 2515 – We’re Doomed

Logo by Mustafa Anabtawi of thepolarcat.com. Veronica Belmont and Roger Chang join to discuss Microsoft’s announcements of backwards compatibility and Minecraft for HoloLens at E3. And is it truly the best lineup of Xbox games in history?

MP3

Using a Screen Reader? click here

Multiple versions (ogg, video etc.) from Archive.org

Please SUBSCRIBE HERE.

A special thanks to all our Patreon supporters–without you, none of this would be possible.

If you enjoy the show, please consider supporting the show here at the low, low cost of a nickel a day on Patreon. Thank you!

Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods, Kylde, TomGehrke, sebgonz and scottierowland on the subreddit

Show Notes

Today’s guests: Veronica Belmont and Roger Chang

Headlines: 

Microsoft had their E3 press conference this morning. The crowd-pleaser was the announcement of backwards compatibility for Xbox 360 games on the Xbox One. Select titles will show up automatically if bought through Xbox Live or can be added by inserting a disc. A new Xbox Elite Wireless Controller was also announced, coming in autumn; no price was given. It is fully reprogrammable and even has swappable buttons and sticks. Windows 10 was announced as a platform for Valve VR, and a version of Minecraft has been created for HoloLens. Among the game announcements were Halo 5: Guardians coming October 27, Rainbow Six: Siege October 13, Rare Replay with 30 classic games for $30 on August 4, and Rise of the Tomb Raider November 10. Also Cuphead, Dark Souls 3… We’re going to talk more about this, hang on.

Bethesda kicked off the pre-E3 press extravaganza last night. A new Doom, just called Doom, is the first game on the new idTech 6 engine, has an accessible modding tool called SnapMap, and will come to PS4, Xbox One, and PC in spring 2016. Elder Scrolls: Legends is a strategy card game with a trailer very similar to Hearthstone’s, free to play on iPad and PC by the end of the year. Dishonored 2 is coming, though we don’t know when. And Fallout 4 arrives November 10, with mods you can create on PC and transfer to Xbox One, and a Pip-Boy app best used inside a full-sized real-life Pip-Boy sleeve available in a collector’s edition. And a free-to-play Fallout mobile game called Fallout Shelter has launched.

Facebook launched a new app called Moments that groups photos based on when they were taken and identifies who is in them. You can then choose to sync them with specific friends and vice versa. It can also group photos based on who is in them and let you search for photos of particular people. Moments launches today in the US on iOS and Android with more countries to follow over time.

Re/code has a breakdown of the revenue split for Apple Music. Apple executive Robert Kondrk, who negotiates music deals, says Apple will pay out 71.5 percent of the $10-a-month subscription revenue in the US. Outside the US the percentage will be around 73 percent. That will be split among music owners (labels and publishers), one assumes based on plays. Apple, however, will not pay labels for rights to their music during the three-month free trial, which begins June 30th.

VentureBeat reports that Razer has acquired Android game-console maker Ouya. Investment bank Mesa Global has confirmed the deal, but Razer has not. VB says Ouya’s debt holders triggered the sale and that buying them out would cost $10 million. Razer has its own Android console called the Forge. Ouya has a library of 1,124 Android games, including some exclusives, and more than 40,000 developers.

The Next Web reports that Skype for Web is now available worldwide. Skype’s web app works with IE, Chrome, Safari and Firefox on Windows and OS X, as well as on Chrome OS and Linux. For now you’ll need a plugin to make calls, but in the future the web app will use WebRTC.

The New York Times reports IBM will commit hundreds of millions of dollars to developing Apache Spark, the open source project for real-time data analysis. Spark was developed at the Algorithms, Machines and People Lab at the University of California, Berkeley. IBM said it will put more than 3,500 of its developers and researchers to work on Spark-related projects, embed Spark in its data analysis software and offer Spark as a service.

Engadget reports Spotify has launched a site called spotify-tasterewind.com which analyzes your music library to recommend decade-specific playlists from the 1960s through whatever we call the last decade before this one. So for instance if I like Major Lazer, Wiz Khalifa and Pitbull, my 1970s playlist might have Bob Marley, The Isley Brothers and Julio Iglesias. Which is what happened for Tom.

News From You:

t2t2 informed us that the makers of Notepad++ have left SourceForge. A blog post on notepad-plus-plus.org cites several incidents in which SourceForge bundled adware into hosted open source projects without notifying the owners and creators of the software. The post reads, “Such a shameless policy should be condemned, and the Notepad++ project will move entirely out of SourceForge.” The post encourages other project owners to move off SourceForge as well.

tglass1976 sent us a Gizmodo article about the first prosthetic leg that can simulate sensation. A team at the University of Applied Sciences Upper Austria relocated a patient’s nerve endings closer to where the prosthesis connects, and connected the nerve endings to stimulators located in the prosthetic legs, which are then connected to six sensors on the sole of the prosthetic foot. When the sensors push against the ground, the nerve endings get a sense of feeling. The sense of touch makes the user safer and can help stop phantom limb pain.

Discussion Section Links:  

http://www.cnet.com/news/microsoft-xbox-e3-2015-press-conference/
 http://www.vg247.com/2015/06/15/xbox-one-now-backwards-compatible-with-xbox-360-games/
 http://arstechnica.com/gaming/2015/06/microsoft-unveils-new-xbox-one-elite-controller-and-weve-held-it/
 http://www.engadget.com/2015/06/15/xbox-game-preview/?ncid=rss_truncated
 http://techcrunch.com/2015/06/15/microsoft-reveals-dedicated-version-of-minecraft-for-hololens/?ncid=rss

Pick of the Day:

Devulu wanted to share this:

I found a beautiful website called http://species-in-pieces.com/.

The website was created by Amsterdam-based designer Bryan James, who decided to push the limits of CSS’s animation capabilities while also building a platform for raising awareness of endangered species around the world. The result is “In Pieces”, an interactive catalog of 30 animals created entirely with CSS.

The animations are fascinating and it also raises awareness, how cool!

Works best in Google Chrome.

Messages:

Russell writes in:

On Friday’s show you were talking about Google being forced to block a website as part of a judgement by the Canadian court against a company called Datalink. The strange thing about this was the use of a private company, Google, to enforce a judgement. I am not a lawyer, but it would seem the enforcement of a judgement would lie in the hands of law enforcement, the judicial system or the correctional system. This seemed like a very strange thing to do and felt a bit off. Wondering if there is a lawyer in the DTNS community who could shed some light on it.

Great to have seen the next Patreon goal met!!

Scott writes:

With Friday’s news/rumour that Blackberry may be working on an Android based phone I’m wondering if Nokia and Blackberry aren’t perfectly suited for a technical partnership of some sort.

Blackberry brings device management & security, with Samsung Knox nipping at its heels. As well as BBM, one of the largest messaging clients (fourth or fifth?).

While Nokia brings solid mapping, which it really wants to become a viable smart phone alternative to Google Maps and/or Apple Maps.

Both have a devoted fan base, and I believe that both have moved away from producing their own hardware.

Perhaps a partnership of two drowning rats?

=====

Tuesday’s Guests: Patrick Beja

 

Cordkillers Ep. 40 – I love you for the conditions we are in

Nielsen is inaccurate but HOW inaccurate? Also whether Amazon should join Ultraviolet. 

Download video

Download audio

CordKillers: Ep. 40 – I love you for the conditions we are in
Recorded: October 13, 2014
Guest: Derrick Chen

Intro Video 

Primary Target

Signal Intelligence

Gear Up

Front Lines

Under Surveillance

On our Radar

  • Young Ones
  • YOUNG ONES is set in a near future when water has become the most precious and dwindling resource on the planet, one that dictates everything from the macro of political policy to the detailed micro of interpersonal family and romantic relationships.

Dispatches from the Front

Just listened and wanted to point out that in the conversation about Kevin Smith, Brian called Tusk a bomb/flop/don’t remember which. Since he’s obviously a Smith podcast listener he probably knows but didn’t think about it: Kevin has really moved away from the traditional money-making methods in favor of more musician-styled ones.

I don’t know for sure about the financing of the movie, but if it’s anything like the Super Groovy Cartoon Movie it’s probably mostly self financed. I know he’s planning on touring it to theaters with live performances, so ticket sales will hopefully make up the “traditional rocket sales” loss.

For example Super Groovy cost $69,000 to make, and was never really released to theaters. But with the tour it was paid for in the first few shows, and while I don’t know exactly what it brought in from what he’s said in podcasts I believe it’s something on the order of five million. Think of the pure profit from that with none of the marketing overhead.

It’s work, yes, but almost his own version of crowd funding… Think of it as interactive Patreon. Possibly something like that could be a vehicle for other well known creators to pay for projects they want to do but can’t get a green light.

-Derek in Chattanooga

PS. Brian is completely right, Myst was the steaming pile that Seventh Guest stepped over on its way to level ‘Awesome’.

 

 

Hey Brian and Tom,

I’m the science teacher in Taylor whose email y’all read on the last episode about Netflix offering channels that streamed the same content to everyone at the same time. I was working my Saturday part-time job with Austin Moonwalks (Brian: hit me up if you want a deal for one of the girl’s birthdays!) when I heard it and about flipped out. Thanks guys, it was awesome to hear y’all talk about it. I don’t expect you to revisit it on the show, but just to clarify: I think I overstated how much I cared about the “communal” experience of watching what everyone else was watching. I didn’t mean for that to be the main focus. That was more of a side-effect. For me, it’s more about the giving-up of control that I need. For example:

My favorite TV show growing up was Star Trek The Next Generation. I watched it at 9pm every night on FOX 42. (Do you remember before it was KEYE, Brian?). I didn’t get to decide what episode I watched. I watched whatever came on: good or bad, whether I liked it or not. Because THAT was the one that was on, and there was nothing I could do about it. Now, I have every episode of the series at my disposal, but I can’t pick one out to watch. It’s impossible! I even devised a randomizing system to pick one out for me, but even that didn’t quite work because I could still stop and change it if there was a part I didn’t like.

It’s not just TV shows. Do you guys remember before DVRs, just going through the channels and happening on a movie that you liked? Maybe you even had it on DVD or VHS, but hadn’t watched it in years. You could have pulled it out anytime and watched it, but you hadn’t and probably wouldn’t for years to come. But there was something about it being ON TV that made you stop changing channels and watch the whole thing.

That’s the feeling I’m talking about. Watching and ENJOYING something by chance, because that’s what was on, and there was nothing you could do about it.

If Netflix had a Sci-Fi “channel”, it could play movies, TV shows, or even documentaries (all of which came from what Netflix already has), and you could just put it on and watch what was there (knowing that other people were watching it too). Maybe I’d come across a TV show I never would have watched or a good movie I hadn’t seen in a long time and never would have picked out even if it were suggested. If I don’t like what’s on the Sci-Fi station, I can click on the comedy station and see what’s there.

I guess some might call this “vegging out,” but that’s exactly what I need to do sometimes.

Anyway, sorry to write so much. Just wanted to make sure you understood what I meant, whether you agree or not.

-Andy (better known by 11 year olds as Mr. Morris)

 

 

Hey Brian and Tom,

I was listening to this week’s show and I had an idea. When you discuss the number of “bosses” you have and how to support the show on Patreon, I think you should call the segment “The TPS Report” (Total Patreon Supporters). You could do it with or without a fancy bumper since Tom usually leads in with a factoid from the relevant year but what will he do when you pass 2014 bosses after all? 🙂 Maybe a running gag about new cover sheets would be in order? Just a thought and I am also one of your bosses!

Thanks,

Tony Sheler
Albany, OR

 

 

Brian said a few times in the last episode that the Chromecast is ‘open’. I’ve looked into developing for the Chromecast and I want to say it definitely is not. If you want to make your app Chromecast-able you need to have your application approved and your application signing key signed by Google. And there’s no way around this. It’s not like Android where you can check the ‘unknown sources’ box and do whatever you want. It is totally controlled.

This may be why the Firefox stick could be better. If it’s truly open you may see things available there that you will never see on Chromecast. Particularly I’m thinking porn and piracy apps like Popcorn Time, or even legally grey apps like Grooveshark (an app which Google has just banned from Chromecast; see http://thenextweb.com/apps/2014/09/09/grooveshark-longer-supports-chromecast-following-riaa-claim-infringes-artists-copyright). That freedom and real openness might be just enough to give the Firefox dongle an edge.

Clint Armstrong

Links

patreon.com/cordkillers
Dog House Systems Cordkiller box

About Virtual Korea


The last 120 years or so have seen tons of change for Korea, but what does the future hold? Tom shares what one heir is doing with their legacy.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible. Become a supporter at the Know A Little More Patreon page for exclusive content and ad-free episodes.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

On the 5th of September 1905, in Portsmouth, New Hampshire, US President Teddy Roosevelt successfully mediated peace between Russia and Japan. The two Empires had been waging war in Manchuria since February 1904, focusing on control of the warm-water port of Port Arthur.

As one of the many provisions of the “Treaty of Portsmouth” Russia recognized Korea as part of Japan’s sphere of influence.

This did not go down well with the Joseon dynasty that had ruled Korea since the 1300s. Korea’s Joseon empire had slowly embraced a pro-Russian stance in opposition to Japan and to supplement support from China. Emperor Gojong had even spent a year in the Russian Legation in 1896 after the murder of his wife by a pro-Japanese faction.

The Emperor did not give up.

In 1907, Roosevelt prevailed on the nations of the world to hold a second “Hague Convention” in the Netherlands. The first had resulted in new rules of war and avenues to avoid it. This second conference included representatives from nations from every continent except Australia and Antarctica.

Emperor Gojong saw this as the last chance for Korea. He prepared three emissaries to secretly travel to the convention without Japan’s knowledge. Russia’s Tsar Nicholas II (yes, the one who would end his reign with the Bolshevik revolution) helped by smuggling the emissaries into the conference hall.

But Japan found out. They objected to the emissaries’ entry on the basis that the treaty of 1905 gave them the right to represent Korea’s interests. The emissaries were turned away and did not get to plead their case before the countries of the world.

As a result, Japan deposed Emperor Gojong, installed his son Sunjong as emperor, and forced him to sign a succession of treaties that eventually resulted in the annexation of Korea by Japan in 1910.

If Emperor Gojong had been able to protect his secrecy a little better, who’s to say whether his emissaries would have succeeded? If only he had the equivalent of a good VPN. Something to keep spies from seeing his traffic.

No, this isn’t a ham-handed transition to a sponsor. But the descendant of Emperor Gojong, and heir to the throne, did create Private Internet Access, one of the most well-reviewed VPNs on the planet. And then recreated the Joseon empire. With blockchain.

Let’s help you Know a Little More about Virtual Korea.

Andrew Lee was a geek. The Indianapolis native had studied at Purdue and the University of Buffalo, but like many tech entrepreneurs, focused on his own passions rather than getting a degree.

As a fan of IRC – Internet Relay Chat, Lee wanted a way to secure his conversations. IRC revealed IP addresses, which was something nefarious folks could use to track you. Especially if you traded in torrents. To protect yourself you needed a VPN, but there were a lot of untrustworthy VPNs out there. So, in 2009, Lee started London Trust Media with the aim of taking VPN mainstream. And in 2010 London Trust Media founded Private Internet Access, an open source VPN provider. PIA focused on privacy with a no logs policy, a kill switch and decent speeds. Deloitte audited PIA in 2022 and found its server configurations were not designed to identify users.

Lee’s activities after the founding of PIA mostly read like a typical tech entrepreneur. He got into Bitcoin in the early days. He started a bitcoin price tracker in 2013 called Mt. Gox Live, which was eventually sold to the ill-fated Mt. Gox cryptocurrency exchange. He acquired Freenode IRC in 2017.

And in 2018 he gained another title. Crown Prince.

Let’s go back to 1910. Emperor Sunjong has once again acquiesced to Japanese demands, and this time sealed the fate of Korea’s Joseon empire. Sunjong signed the Japan-Korea annexation treaty, making Korea part of Japan.

In thanks, Japan demoted Sunjong to King and he died in 1926. His powerless title passed to his brother Yi Un. Another brother Yi Kang had seniority, but had married a commoner and was passed over.

Eventually the allies defeated Japan in World War II. The liberation of Korea in 1945 brought about a republic in the south. The monarchy was not restored. While North Korea uses the word Joseon in its official name, it also does not recognise a royal family.

The royal line lived on, however. There are many descendants of the royal court, and more than one of them claims to be the legitimate heir to the throne. A few of them even lobby for the creation of a constitutional monarchy, similar to what exists in the United Kingdom. One of the claimants to the throne is Yi Kang’s son Yi Seok.

Yi Seok has a colorful history himself. He was born in Sarong palace in 1941 in the waning days of Japan’s occupation of Korea. With the founding of the Republic of Korea in 1948, the imperial family was sent out of the palace. Yi Seok struggled but eventually found success as a singer. The “singing prince,” Yi Seok, had a hit album called Pigeon House in 1967. He fought for Korea in the Vietnam War. He immigrated to the US for a time and worked as a landscaper, then returned to Korea in the 1990s. He eventually began working for the city of Jeonju’s tourism department and as a professor of history at Jeonju University.

Here’s the connection to our main story. In 2006, Yi Seok founded the Imperial Culture Foundation of Korea in order to lobby for a constitutional monarchy. Since the death of his cousin Yi Ku in 2005, Yi Seok considers himself the head of the house of Yi and crown prince and heir to the Joseon throne.

A moment to be clear. No one agrees who the legitimate heir to the throne is. There is no imminent possibility of the throne being restored by the Republic of Korea. Yi Seok’s claim is disputed.

But the man knows how to steal a headline.

As Andrew Lee, founder of PIA VPN tells it, he was playing Super Smash Brothers when a distant relative named Won Joon Lee interrupted him with a visit. Won Joon Lee’s grandfather is Yi Seok. After a conversation and a look at some family photos, Andrew Lee was convinced to fly Yi Seok to LA and take him to some golf tournaments and celebrity galas. They bonded over their shared love of music. And in the end, Yi Seok proposed adopting Andrew Lee as his heir.

On October 6, 2018, at a ceremony in Beverly Hills, California, Yi Seok declared Andrew Lee the crown prince of Korea and heir to the throne.

In an interview with Korea IT Times, Andrew Lee explained “The Great King Sejong never wished for the Great Korean People to have restricted access to the Internet,” and announced plans to create an imperial fund to invest in small businesses in Korea and teach coding. “The family intends to educate the Great Korean People with web and software development,” Lee said.

And while the imperial family doesn’t have its great stores of wealth any longer, Andrew Lee has made his own.

He has those early Bitcoin investments, and in 2019 the Israeli company Kape Technologies bought PIA for $95.5 million. Lee didn’t get all of that, but he got a chunk. Enough that in 2020 he moved into a $12.6 million house in Thousand Oaks, California.

Andrew Lee has also continued Yi Seok’s tradition of music. In 2023, Lee appeared under the name KingLee, rapping on J-Money’s album, “Dun It All.”

And most importantly, King Andrew Lee has restored the Joseon dynasty. In March 2022, Lee founded Joseon 2.0, a cloud-based, blockchain-operated successor of the Joseon dynasty. Its digital charter makes clear it has no territorial aims, so South Korea has nothing to worry about, but it does consider itself a virtual successor of the imperial kingdom founded in 1392. To bolster its legitimacy, Joseon 2.0’s charter claims that treaties signed with the original kingdom are in full force. That’s based on the fact that Emperor Gojong never agreed to be deposed and the treaties Gojong signed were perpetual. So if Andrew Lee is the heir to Gojong, and the Republic of Korea has no ties to the imperial family, then, King Andrew Lee asserts, his virtual kingdom is the legitimate successor of Gojong’s.

Joseon 2.0 has also established bilateral relations with Antigua and Barbuda. Joseon 2.0 says this makes it the first cybernation to be recognized by a UN member.

And of course there is a cryptocurrency called the Joseon Mun or JSM which has about $5 million of trading value at around a penny US per coin.

Many organizations have tried and fallen short of becoming a virtual nation. But none to my knowledge have claimed the centuries-long tradition that Joseon 2.0 does. We’ll leave you to judge the legitimacy of the various claims to that tradition.

As for me, I hope you know a little more about Joseon 2.0.

저는 당신이 조선2.0에 대해 조금 더 알기를 바랍니다 (“I hope you know a little more about Joseon 2.0.”)

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos and Dog and Pony Show Audio. The public key cryptography players were Sarah Lane as Alice, Shannon Morse as Eve and Andrew Heaton as Bob. It’s issued under a Creative Commons Share Attribution 4.0 International License.

About ALOHANet


The backbone of nearly every network you use on a daily basis is based on a system designed to be cheap and easy to implement.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible. Become a supporter at the Know A Little More Patreon page for exclusive content and ad-free episodes.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

You know what an ethernet cable is, don’t you? Even if you don’t think you do, I bet you do. It’s that cable you plug into a computer, or a router, that delivers the internet. I know, so many people are doing WiFi these days that your laptop might not even have an ethernet port anymore. But I bet your modem does. Probably your game console does. Heck, even your TV might have one.
For most of the internet’s history the ethernet cable has been the most reliable and fastest way to deliver internet traffic. And we owe it all to Hawaii.
Not what you were expecting me to say?
1968 was the year of the Mother of All Demos, but halfway across the Pacific Ocean one more piece of the future was taking shape on its own.
See, in 1968, the University of Hawaii was like a lot of other colleges. It had a big time-share computer. But the University of Hawaii had a unique issue. At Stanford, if you wanted to use the computer you walked over to the building where it was. In Hawaii, that wasn’t always practical. If you were studying on the island of Maui, it wasn’t a simple matter to walk over to the building on Oahu where the computer was.
Now even Stanford could have remote terminals in other buildings that connected to the main timeshare computer. That was also tough in Hawaii. It would be pretty expensive to run a cable through the ocean between Oahu and the Big Island or any other island.
That’s where Franklin Kuo comes in. Until recently, you wouldn’t recognise where he was born. But Wuhan, China has become famous for other reasons now. When Kuo was 16, though, he arrived in the United States and finished high school in New York City. He then got his bachelor’s, master’s and doctorate in electrical engineering at the University of Illinois in Urbana-Champaign. He caught on at Bell Labs after graduation and worked there until 1966, when he made a fateful decision.
He flew halfway back to his birthplace to take a job at the University of Hawaii as a full professor. And there he met fellow electrical engineering professor Norman Abramson. Together they led a team that solved that problem of how to connect those University campuses on other islands to the timesharing computer on Oahu. And their solution ended up as the foundation of ethernet.
Let’s help you know a little more, about ALOHANet.

ALOHA net sort of stands for Additive Links On-line Hawaii Area. But that’s something of a contrived acronym. They just wanted to call it Aloha. The idea was to use low-cost, off the shelf radio equipment.
So they needed a system that didn’t rely on precision. In fact it had to be fault tolerant.
They decided to use packets of data, an idea borrowed from the ARPANET which was also under development at the same time. But this wasn’t about using the ARPANET, not yet. This was just sharing local data from the central computer to clients on other islands.
The hub was the central computer. It broadcast its packets out to everyone on the outbound channel. It wasn’t trying to target the receiver.
The clients on the other islands broadcast their packets on the inbound channel.
The outbound channel was pretty easy to manage. Everybody got everything and the local client would sort out which packets were meant for it and ignore the rest.
But the inbound packets could be a mess. What if two users on Maui and one on the Big Island all sent their packets at the same time? How would you handle that? The answer? Don’t!
Just acknowledge when you did get a packet. The hub would send an acknowledgement every time it successfully received a packet from a client. If the client didn’t get that acknowledgement after a certain amount of time, it sent the packet again. Eventually every packet found a clear space in the transmissions and made it through.
This was the main difference between ALOHANet and ARPANET. ARPANET nodes could only talk to a single node at a time, so each node had to know if it was OK to talk or it would remain silent. ALOHANet didn’t need to handle giving clients permission to send data. Just keep sending data until it makes it through.
Since the nodes, the hub and the client, didn’t have to coordinate on when to talk to each other the protocol and the hardware could be much simpler. You just needed a separate frequency for outbound and inbound, that way the broadcasts and the acknowledgements from the hub weren’t competing with the incoming requests.
And the packets from the hub needed an address so the client would know if they were meant for them or not.
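That send-until-acknowledged loop is the heart of the scheme, and it can be sketched in a few lines of Python. This is a hypothetical illustration, not actual ALOHANet code; the `deliver` callback stands in for a radio transmission that either reaches the hub cleanly or is lost in a collision.

```python
import random

def send_with_retries(deliver, max_attempts=20, seed=None):
    """Toy ALOHA client: broadcast a packet, wait for an acknowledgement,
    and re-send after a random backoff if no acknowledgement arrives.
    `deliver` returns True when the hub received the packet cleanly."""
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        if deliver():  # hub heard us and sent an acknowledgement
            return attempt
        # Collision: pick a random number of "frame times" to stay silent,
        # so two colliding clients are unlikely to collide again.
        backoff = rng.randint(1, 2 ** min(attempt, 5))
        # (a real client would wait `backoff` frame times before retrying)
    return None  # gave up

# Example: a lossy channel where each attempt collides about 60% of the time
channel = random.Random(7)
attempts = send_with_retries(lambda: channel.random() > 0.6)
print("delivered after", attempts, "attempt(s)")
```

Notice there is no coordination at all: the client never asks permission to send, it just keeps trying until the acknowledgement comes back.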
The first packet broadcasting unit went online in June 1971.
That version, now called Pure ALOHA, was incredibly simple. If you have data, broadcast it. If there’s a collision, re-send the data later. The determination of “later” relied on a lot of math involving where the clients and hub were, how far apart, and how long it took them to create packets. That math determined the efficiency of the network. But it worked.
Slotted ALOHA was an improvement that increased the maximum throughput. Stations were given timeslots and could only start transmission at the beginning of a timeslot. You could still send data any time, as there were lots of timeslots, but the arrangement reduced collisions.
Reservation ALOHA improved efficiency further by reserving a slot for any client that successfully used it. Clients had to wait for an open slot and then reserve it by sending a packet. Again, there were enough slots that this didn’t slow things down much, and the reduction in collisions sped things up.
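The efficiency gap between those variants was worked out in later analysis; the standard textbook formulas (figures from that analysis, not from the episode) put Pure ALOHA’s peak channel utilization at about 18.4% and Slotted ALOHA’s at double that, about 36.8%. A quick sketch:

```python
import math

def pure_aloha_throughput(G):
    # Pure ALOHA: a frame is vulnerable for two frame times, so at
    # offered load G (frames per frame time) it succeeds with probability e^(-2G)
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    # Slotted ALOHA: transmissions align to slot boundaries, halving
    # the vulnerable period; success probability improves to e^(-G)
    return G * math.exp(-G)

for G in (0.25, 0.5, 1.0):
    print(f"G={G}: pure={pure_aloha_throughput(G):.3f}, "
          f"slotted={slotted_aloha_throughput(G):.3f}")
# Pure ALOHA peaks at G=0.5 (1/2e ≈ 0.184); Slotted ALOHA at G=1 (1/e ≈ 0.368)
```

That doubling is exactly what lining transmissions up on slot boundaries buys you: a colliding frame can only be hit by frames in its own slot, not by anything overlapping from either side.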
The principles of ALOHANet went on to be used in satellites, mobile phone networks and WiFi.
But the first and arguably most well known of its uses was by Robert Metcalfe at Xerox PARC in 1973.
Metcalfe was working at Xerox PARC and finishing his doctoral thesis about ARPANET. Harvard had rejected his first draft. He read a paper about ALOHANet and figured out how to fix a few of its bugs. He then included his bug fixes in his thesis, and Harvard accepted it.
That stuck with him, so when he and David Boggs were figuring out a standard for connecting computers over short distances, Metcalfe included some of the ways ALOHA Net handled collisions as they traveled through the wires.
Two years after ALOHANet went live, ethernet first functioned on November 11, 1973.
It was one of many innovations to come out of Xerox PARC in the 1970s, many of them furthering the work of Douglas Engelbart, and many of them conducted by folks who had worked with Engelbart on the Mother of All Demos. We’ll get into Xerox PARC in the future. Stay tuned.
But let’s get back to the ALOHANet.
In October 2020, the IEEE presented the University of Hawaii at Mānoa, the location of the ALOHANet hub, with a plaque commemorating the network as an official IEEE Milestone.
It notes that ALOHANet was the first to demonstrate that communication channels could be effectively and efficiently shared on a large scale using simple random access protocols. You didn’t need permission to send your data, just send it when you want.
Without the need to share computer resources between campuses on multiple islands, ALOHANet would never have been built. And without ALOHANet we don’t get WiFi, cell networks, ethernet and more.
In other words, I hope you Know a Little More about ALOHANet.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos and Dog and Pony Show Audio. The public key cryptography players were Sarah Lane as Alice, Shannon Morse as Eve and Andrew Heaton as Bob. It’s issued under a Creative Commons Share Attribution 4.0 International License.

Spider-Man 2 Beats Records in its First 24 Hours – DTH

October Apple event rumored, NASA patches Voyager bugs, Japan joins the Google investigation train.

MP3

Please SUBSCRIBE HERE.

You can get an ad-free feed of Daily Tech Headlines for $3 a month here.

A special thanks to all our supporters–without you, none of this would be possible.

Big thanks to Dan Lueders for the theme music.

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods, KAPT_Kipper, and PJReese on the subreddit

Send us email to [email protected]

Show Notes
To read the show notes in a separate page click here.

About Video Conferencing


Video conferencing was a vital method of communication during the COVID-19 pandemic, but its roots go back much longer than you think.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible. Become a supporter at the Know A Little More Patreon page for exclusive content and ad-free episodes.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

Too soon? I know, but bear with me. Because among all the other unprecedented first time in history moments we experienced over the last few years, the boom in video conferencing was one of the biggest. Use of video conferencing to conduct meetings skyrocketed in 2020. With Zoom alone going from an estimated 10 million daily meeting participants in December 2019 to 300 million by April 2020.
You’d almost think Zoom invented video conferencing. But of course we all know that Skype predated Zoom. And Webex and others predated those. In fact, the roots of video conferencing go pretty far back.
That’s Douglas Engelbart demonstrating his version of video conferencing in the Mother of All Demos in 1968. Pretty far back right? It was one of the many elements of that demo that became a real product. Maybe faster than you think. AT&T launched the first true video-conferencing system on June 30, 1970. Which means I’m two days older than video conferencing. Anyone could subscribe to AT&T’s service in their home or office. If you had the money.
But the roots of video conferencing go even farther back. As soon as the telephone was patented in 1876, people began imagining telephonoscopes and electroscopes, and video telephones. Reality lagged a little behind their imagination.
Ernest Hummel was able to transmit still images using his Telediagraph as early as 1895. It was limited to images that could be made in shellac on foil, but it was something. By 1913, Édouard Belin’s Bélinographe used a photocell, and by 1921, Western Union had launched the Wirephoto service, which could transmit photos over phone lines. These took more than a minute per image, so no video.
One-way video came along as television. But it took until 1930 for AT&T to develop a “two-way television-telephone” system. The systems were not terribly practical though, transmitting low-resolution black-and-white video over telephone lines. And they were basically a series of still images, not video.
AT&T was trying to figure out how to do this over its copper phone lines. But you didn’t have to use phone lines.
Dr. Georg Schubert developed the first public video telephone service using coaxial cable, the cylindrical cables most people are familiar with from cable TV. It launched March 1, 1936 connecting two closed-circuit televisions by coaxial cable in post offices in Berlin and Leipzig, Germany, about 160 km apart. It had 150 lines of resolution at 25 frames per second. 150p! And it worked. By 1938, Berlin, Leipzig, Hamburg, Nuremberg and Munich each had two video telephone booths in their main post office. If two people wanted to video call each other, they would each visit one of the booths at their post office at the same time. There were plans to expand further but those ended with the start of the war in 1939, and the system was shut down in 1940 so it could be used for telegraph and broadcast TV considered more essential to the war effort. A similar post-office based system was built in France in the 1930s as well.
Meanwhile AT&T kept working on videophones over telephone lines. The Picturephone Mod I used a small oval case on a swivel stand to house the screen. AT&T demonstrated the Mod I at the New York World’s Fair in 1964 by making a video call to Disneyland in California. AT&T opened its first public videophone booths later that year in New York, Washington DC and Chicago, with First Lady Lady Bird Johnson doing the inaugural honors. Each participant in a call could reserve a time and visit a booth to make their call. Calls cost $16 to $27 for three minutes. That would be $150-$260 in 2023. It was too expensive. And the booths closed in 1968, the same year Douglas Engelbart demonstrated his computer-based video conferencing in the Mother of All Demos.
AT&T took up that cue and launched Mod II in 1970. This was more like a videophone. Anyone could be connected to the system, you didn’t have to visit a dedicated booth. Pittsburgh Mayor Peter Flaherty made the first Mod II video call on June 30, 1970 to Alcoa CEO John Harper. Service launched the next day, July 1, 1970 with 38 picture phones located across 8 companies in Pittsburgh. A set cost $150 to install and $160 a month to use and additional sets could be added for $50 a month each. You got 30 minutes of calling per month with extra minutes costing 25 cents a minute. Resolution was 250 scan lines of black and white video. Customers for the service peaked at 453 in early 1973 and it was discontinued later that year.
Compression Labs is often seen as an AT&T competitor that picked up that baton but it was even more expensive. In 1982, it launched the CLI T1, the first commercial group video conferencing system. It cost $250,000 to install and each call was $1,000 an hour.
And that still wasn’t digital video conferencing. It was still just a phone call with video shoehorned in.
To do digital you needed digital video compression and to do digital video compression you need math.
Anil K. Jain was born in India in 1946, as the war that had shut down Germany’s big videophone experiment ended. Jain received a degree in electrical engineering at the Indian Institute of Technology in 1967 and a PhD from the University of Rochester in 1970, the year AT&T launched its videophone. Jain would develop the math that would mean you didn’t need these dedicated units. He worked on transform coding, image compression and block-based motion compensation for video compression. And video compression meant all you needed was a camera and the internet to do what these big expensive systems in Germany and at AT&T were trying to do.
By 1981, Jain was at the University of California at Davis and published a paper combining his block-based motion compensation with transform coding. That paper inspired two students at MIT, Brian L. Hinman and Jeffrey G. Bernstein, to work on a way to compress video so it could be used over the internet. By 1984 they, along with their professor David H. Staelin, had founded PicTel, later renamed PictureTel to distinguish it from Pacific Telephone Company, aka PacTel. Its first product was a video codec, the C-2000, the first commercial implementation of a compressor/decompressor of its kind.
Building on Jain’s math, the C-2000 analyzed the motion between frames, meaning it could work with much less data than an algorithm that treated each frame of video independently. Remember all those slowly transmitted still images of the early 20th century? To oversimplify, the C-2000 let you get by on fewer still images and make up what came between so it looked like smooth video motion. In practice that meant you could do video over a 128-kilobit-per-second ISDN line instead of needing dedicated fixed-location lines.
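To make the motion-compensation idea concrete, here’s a toy sketch (my own illustration, not PictureTel’s actual C-2000 algorithm): each block of the current frame is predicted by the best-matching block of the previous frame, so only a motion vector plus a small residual needs to be sent.

```python
import numpy as np

def best_match(prev, block, top, left, search=2):
    """Search a small window in the previous frame for the offset
    that best predicts `block` (sum of absolute differences)."""
    h, w = block.shape
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + h <= prev.shape[0] and x + w <= prev.shape[1]:
                cand = prev[y:y + h, x:x + w].astype(int)
                err = np.abs(cand - block.astype(int)).sum()
                if err < best[2]:
                    best = (dy, dx, err)
    return best[0], best[1]

def encode_frame(prev, cur, bs=4):
    """Represent `cur` as per-block motion vectors plus residuals
    against `prev` -- the core idea of motion compensation."""
    vectors, residuals = [], []
    for top in range(0, cur.shape[0], bs):
        for left in range(0, cur.shape[1], bs):
            block = cur[top:top + bs, left:left + bs]
            dy, dx = best_match(prev, block, top, left)
            pred = prev[top + dy:top + dy + bs, left + dx:left + dx + bs]
            vectors.append((dy, dx))
            residuals.append(block.astype(int) - pred.astype(int))
    return vectors, residuals

# Toy demo: the "current" frame is the previous frame shifted right one pixel
prev = np.arange(64, dtype=np.uint8).reshape(8, 8)
cur = np.empty_like(prev)
cur[:, 1:] = prev[:, :-1]  # picture content moved one pixel right
cur[:, 0] = prev[:, 0]     # left edge just repeats

vectors, residuals = encode_frame(prev, cur)
# Blocks covering the moved content need only the vector (0, -1)
# and an all-zero residual -- almost nothing to transmit
print(vectors[1], int(np.abs(residuals[1]).sum()))
```

When the scene merely shifts, most residuals are zero, which is the kind of data savings that made video over narrow phone lines feasible.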
Most video compression standards for two-way video are based on this motion compensation and transform coding approach, including the pervasive H.264 codec. PictureTel marketed its codec and eventually used it in its own software, including LiveShare Plus for Windows 3.1. And PictureTel did well. In fact AT&T used it for an international video conference in 1989. Hinman went on to found a separate teleconferencing company called Polycom in 1990, and Polycom bought PictureTel in October 2001.
The next step for video conferencing was the camera. The pioneers in this were motivated by coffee.
Yes, the British drink coffee. And at the Cambridge computer lab in 1991, the coffee machine was in a separate room. Many were the agonies of a computer lab user trudging all the way to the Trojan Room, only to find the coffee pot empty.
Quentin Stafford-Fraser and Paul Jardetzky had a solution. They connected a video capture card to an Acorn Archimedes computer and a 128 x 128 greyscale camera and pointed it at the coffee pot so you could check whether the trip was worth it. At first the image was only delivered over the local network, but in 1993 web browsers gained the ability to display images, so the camera was connected to the internet to make it easier to access on any computer in the lab… and the world. Daniel Gordon and Martin Johnson made it available over HTTP, thus making it the first webcam.
But that was still not two-way video. So here we go again. Another technician in a college saves the day.
Tim Dorcey worked in IT at Cornell University and used a new video codec and the Internet Protocol to write CU-SeeMe for the Mac in 1992. You could put it on any machine and connect to any other machine running it over the internet. That way you didn’t have to set up a server for your video. Just install the software and start calling! It only did video at first but added audio in 1994 thanks to Maven, a client developed at the University of Illinois.
A National Science Foundation project called Global Schoolhouse made CU-SeeMe available to the public on April 26, 1993. CU-SeeMe was used by WXYC radio in Chapel Hill, North Carolina, to simulcast its radio broadcast on the internet, making it the first internet radio station.
ABC’s World News Now became the first TV program to stream live on the Internet on November 23, Thanksgiving morning, 1995.
And that’s really the last step. Some innovative math and a smart codec implementation meant you didn’t need huge specialized machines, just a server, a bit of coffee-motivated ingenuity meant you didn’t need a big expensive camera, and some clever software coding meant you didn’t even need the server. From here on video conferencing exploded.
Microsoft entered the game with NetMeeting and as bandwidth increased, the amount of video conferencing software from Skype to GoToMeeting and beyond, increased with it.
So much so, that Eric Yuan had a hard time getting funding for his startup Saasbee when he left Cisco in 2011. Not because of the name, but because everybody thought the market was saturated. Yuan eventually prevailed on a few folks and in May 2012, reportedly influenced by the children’s book Zoom City, he changed the company’s name to Zoom. It took a few years but things worked out for Yuan.
And now many of us work, using Zoom, or some other kind of video conferencing technology. I hope this gives you some of the historical perspective of how we got to this world of working from home and Zoom fatigue.
In other words, I hope you Know a Little More about Video Conferencing.

YouTube Traffic Signals Problems – DTNS 4595

On October 1st, Microsoft will separate Teams from the Microsoft 365 and Office 365 suites in the European Economic Area and Switzerland to head off antitrust regulators. Plus, how is a YouTube invalid traffic bug causing problems for YouTubers? Tasia Custode from “AI Named This Show” explains. And the Philips Hue line gains smart cameras and sensors.

Starring Sarah Lane, Robb Dunewood, Tasia Custode, Roger Chang, Joe.

MP3 Download


Using a Screen Reader? Click here

Download the video version here.

Follow us on Twitter, Instagram, YouTube and Twitch

Please SUBSCRIBE HERE.

Subscribe through Apple Podcasts.

A special thanks to all our supporters–without you, none of this would be possible.

If you enjoy what you see you can support the show on Patreon, Thank you!

Become a Patron!

Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods Jack_Shid and KAPT_Kipper on the subreddit

Send us email to [email protected]

Show Notes
To read the show notes in a separate page click here!


About RSS


The story of RSS is simple and yet combative. In fact RSS’s success may hinge on one man’s idealistic dedication to his principles. Tom takes you through the history of RSS.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

You probably use an RSS feed. In fact if you got this episode as a podcast you definitely used an RSS feed. Most people these days don’t even know they’re there. The story of RSS is simple and yet combative. In fact RSS’s success may hinge on one man’s idealistic dedication to his principles. If you’ve ever thought “why are people making this so complicated?” If you’ve ever wondered what it would be like to be a person who just shut everyone up with an action that for right or wrong would stand the test of time. Get ready to Know a Little More about RSS.

People say RSS stands for Really Simple Syndication though it really doesn’t. That’s one of the charms of the story of RSS. Throughout its formative years nobody could agree on much and the name is still a matter of debate to this day.
If you’ve heard of RSS at all, it was most likely in connection with Podcasts. Podcasts are delivered through RSS feeds to the apps and platforms where you can listen to them. Behind every Apple Podcast, Google Podcast, Audible Podcast and even most Spotify podcasts, there’s a simple RSS feed. You may also use RSS as a feed for headlines. If you use Feedly, NewsBlur or Inoreader or something like that you’re using RSS.
But where did RSS come from? Oh my friends. Be prepared for a tale of idealism, abandonment, betrayal and perseverance. It is the tale of RSS.
In the earliest days if you wanted to know if a website had been updated you had to visit it. As websites became more common this became a chore. So people experimented with ways to let you know when a website had been updated, without you having to go there. One of the earliest attempts at this was the Meta Content Framework or MCF, developed in 1995 in Apple’s Advanced Technology Group.
Ramanathan V. Guha was part of that group and a few years later, he moved over to browser-maker Netscape, where he and Dan Libby kept working on these sorts ideas. Guha particularly liked developing Resource Description Frameworks, or RDFs, similar to the old MCF he worked on at Apple. They were complex ways to show all kinds of things about web pages without having to visit them.
But Netscape’s team of Guha, Libby and friends was not alone. And early on they weren’t the most likely to succeed. The Information and Content Exchange standard, or ICE, was proposed in January 1998 by Firefly Networks, an early web community company, and Vignette, a web publishing tool maker. They got some big names to back ICE too: Microsoft, Adobe, Sun, CNET, National Semiconductor, Tribune Media Services, Ziff Davis and Reuters were among the ICE authoring group. But it wasn’t open source. In those days respectable tech companies like those I just named still cast a skeptical eye on open source code. How were you supposed to make money on it? Who would keep working on it if they weren’t paid? So the members of the ICE authoring group paid people to develop it. And in the end that meant it developed slower than competing standards.
Interestingly, ICE’s failure caused Microsoft to get a little more open, a little earlier than you might expect. In 1997 Microsoft and PointCast created the Channel Definition Format, or CDF. They released it on March 8, 1997, and in order not to fall into the death by slow development that ICE suffered, they submitted it as a standard to the W3C the next day.
It was adopted quickly and in fact its success planted the seed of its successor. Dave Winer had founded a software company in 1988 called UserLand. UserLand added support for CDF on April 14, 1997 one month after its release. Winer also began publishing his weblog, Scripting News in CDF. But CDF, like ICE, was more complicated than a smaller site needed. So on December 27, 1997, Winer began to publish Scripting News in his own scriptingNews feed format as well. He just simplified CDF for his own needs and made that available for anyone who wanted to use it to subscribe.
Meanwhile Libby had been working away at his own version of a feed platform, and Netscape was about to make a big launch that would cause his project to surpass them all. On July 28, 1998, Netscape launched My Netscape Portal. This was one of the earliest web portals: a place that aggregated links from sites around the Web. You could add sites you wanted to follow, like CNET or ZDNet, and then see their latest posts all in one place.
Netscape kept the links updated with a set of tools developed by Libby. He had taken a part of an RDF parsing system that his friend Guha had developed for the Netscape 5 browser, and turned it into a feed parsing system for My Netscape. He called it Open-SPF at the time, for Site Preview Format.
Open-SPF let anyone format content that could then be added to My Netscape. It was rich like CDF, open like CDF but had one advantage over CDF. It worked on My Netscape, which suddenly everyone wanted to be on.
Netscape provided it for free because that meant the company didn’t have to spend time reaching deals for content. You want your content on My Netscape, use Open-SPF, it can be there. That meant there was more content available for My Netscape than was usual on curated pages. The content was free for both the users and Netscape. More content meant more users and more users meant Netscape could serve more ads. And content providers were willing to create the Open-SPF feeds, because they weren’t burdensome to create and the sites got more visitors who saw their content on My Netscape and clicked on links to come to their sites.
Sound familiar? This arrangement is the one Google still tries to rely on for Google News. Except the news publishers have changed tunes. Back then they were all about bringing visitors to their websites and happy that Netscape sent folks their way for free. But as the years have passed and revenue has shrunk, now they’re more about getting Google to pay them for linking to their news.
Anyway back to the rise of Netscape.
1999 is not only the end of the millennium. It’s not only when everyone actually got to party the way Prince had been asking them to pretend to party. 1999 was a huge year for RSS. It was about to reach its modern form and become something users of RSS today would recognise. By name.
On Feb. 1, 1999 Open-SPF was released as an Engineering Vision Statement for folks to comment on and help improve.
Dave Winer commented that he would love to add Scripting News to My Netscape but he didn’t have time to learn Netscape’s Open-SPF. However because he had his own self-made feed format using XML he’d “be happy to support Netscape and others in writing syndicators of that content flow. No royalty necessary. It would be easy to have a search engine feed off this flow of links and comments. There are starting to be a bunch of weblogs, wouldn’t it be interesting if we could agree on an XML format between us?”
However by Feb. 22, Scripting News was publishing in Open-SPF and available at My Netscape. Feeling like it was a success, Libby changed the name of Open-SPF to refer to the fact that it used RDF, calling it the RDF-SPF format, and released specs for RDF-SPF 0.9 on March 1. Shortly after release he changed the unwieldy name to RDF Site Summary, or RSS for short. Thus begins the first in a parade of meanings for RSS.
And the new name took off. Carmen’s Headline Viewer came out on April 25th as the first RSS desktop aggregator and Winer’s my.UserLand.com followed on June 10th as a web-based aggregator.
Folks liked the idea obviously, but a lot of RSS enthusiasts thought the RDF was too complex, Dave Winer among them. Libby hadn’t ignored Winer’s earlier offer either. In fact, Libby thought they weren’t really using RDF for any useful purpose. So he simplified the format, adding some elements from Winer’s scriptingNews and removing RDF so it would validate as XML. This was released on July 10, 1999 as RSS 0.91.
Some folks write that the name changed to Rich Site Summary at that point but Winer wrote at the time “There is no consensus on what RSS stands for, so it’s not an acronym, it’s a name. Later versions of this spec may say it’s an acronym, and hopefully this won’t break too many applications.”
Anyway by 1999, like Toy Story, RSS is on a roll. Libby is bringing in feedback from the community and creating a workable usable standard that is reaching heights of popularity beyond just the confines of My Netscape.
Like some kind of VH1 Behind the Music story, just as it reached that height, everything fell apart.
Netscape would never release a new version of RSS again.
In the absence of Netscape’s influence, two competing camps arose.
Rael Dornfest wanted to add new features, possibly as modules. That would mean adding more complex XML and possibly bringing back RDF.
Dave Winer preached simplicity. You could learn HTML at the time by just viewing the source code of a web page. Winer wanted the same for RSS.
On August 14, 2000, the RSS 1.0 mailing list became the battleground for the war of words between the two camps.
Dornfest’s group started the RSS-DEV Working Group. It included RDF expert Guha as well as future Reddit co-founder Aaron Swartz. They added back support for RDF as well as including XML Namespaces. On December 6, 2000 they released RSS 1.0 and renamed RSS back to RDF Site Summary.
Not to be left behind, two weeks later On December 25, 2000, Winer’s camp released RSS 0.92.
Folks, grab your steak knives. We have a fork.
In earlier days, Libby, or someone at Netscape, would have stepped in. But AOL had bought Netscape in 1998 and had been de-emphasizing My Netscape. They wanted people on AOL.com. And if they didn’t care about Netscape, they cared even less about RSS. In fact they actively did things that could have ended RSS. In April 2001, AOL closed My Netscape and disbanded the RSS team, going so far as to pull the RSS 0.91 document offline. That document was used by every RSS parser to validate feeds. Suddenly all RSS feeds stopped validating. Apparently this had little effect on visitors to AOL.com or people dialing in to their internet connection, so AOL just let them stay broken. With the RSS team gone and AOL doing nothing, RSS feeds were looking dead in the water.
But the RSS 0.91 document was just a document after all. And there were copies. Anybody theoretically could host it as long as everyone else changed their feeds to validate to the new address. Dave Winer stepped up.
Winer’s UserLand stepped in and published a copy of the document on Scripting.com so that feed readers could validate. That right there won Winer a lot of good will.
An uneasy truce followed. Whether you were using Netscape’s old RSS 0.91, Winer’s new RSS 0.92 or the RDF Development Group’s RSS 1.0 they would all validate.
By the summer of 2002, things are going OK and tempers have cooled. Nelly has a hit song advising folks what to do if things get hot in here. Maybe we can solve this? Let’s try to merge all three versions into one new version we can all agree on and call it RSS 2.0, right?
Except they couldn’t agree. Winer still wanted simplicity. RDF folks still wanted RDF and the fun features it would bring. They would agree to a simplified version of RDF but they still wanted it. To make matters more confusing, Winer was discussing what should happen by blog, with everyone pointing to their own blogs. The RDF folks were talking about it on the rss-dev mailing list.
Communication, oddly for a discussion about a communication platform, was the problem. Since neither side was seeing the other’s arguments, they never came to an agreement. So Winer’s group decided not to wait. On September 16, 2002, UserLand released its own spec and just went ahead and called it RSS 2.0. And Winer declared RSS 2.0 frozen. No more changes.
Discussions continued on the RSS-dev list but Winer’s camp got another victory when in November 2002, the New York Times adopted RSS 2.0. That caused a lot of other publications to follow suit. Further consolidating the position.
The next year in another move fending off the debate, on July 15, 2003, Winer and UserLand assigned ownership of RSS 2.0 copyright to Harvard’s Berkman Center for the Internet & Society. A three-person RSS Advisory board was founded to maintain the spec in cooperation with the Berkman Center which continued the policy of considering RSS frozen. Mic. Dropped.
There was still a resistance. IBM developer Sam Ruby set up a wiki for some of the old RDF folks, and others, to discuss a new syndication format to address shortcomings in RSS and possibly replace Blogger and LiveJournal’s protocols. The Atom syndication format was born of this process and was proposed as an internet official protocol standard in December 2005. Atom has a few more capabilities and is more standard compliant, being an official IETF Internet standard, which RSS is not. But in practice they’re pretty similar. Atom’s last update was October 2007 but it is still widely supported alongside RSS.
And RSS 2.0 kept going. In 2004 its abilities to do enclosures, basically point to a file that could be delivered along with text, led to the rise of Podcasts. Basically RSS feeds that pointed to MP3 files.
In 2005, Safari, Internet Explorer, and Firefox all began incorporating RSS into their browser’s functions. Mozilla’s Stephen Hollander had created the Web Feed icon, the little orange block with a symbol like the WiFi symbol at an angle. It was used in Firefox’s implementation of RSS support, and eventually Microsoft and Opera used it too. It was also used for Atom feeds. Stephen Hollander did what most could not. Get people interested in providing automated Web feeds to agree on something.
And in 2006, with Dave Winer’s participation, RSS Advisory Board chairman Rogers Cadenhead relaunched the body, adding 8 new members to the group in order to continue development of RSS.
Peace in the form of an orange square was achieved.
OK. So RSS has a colorful history. What the heck does it do?
That part is pretty simple. It’s a standard for writing out a description of stuff so that it’s easy for software to read and display it.
Basically you have the channel (or Feed in Atom) and Items (or entries in Atom).
RSS 2.0 requires the channel to have three elements, the rest are optional. So to have a proper feed you need a title for your channel, a description of what it is and a link to the source of the channel’s items.
Like Daily Tech News Show – A show about tech news. And a link to dailytechnewsshow.com
Optional elements of RSS are things like an image, publication date, copyright notice, and even more instructions like how long to go between checking for new content and days and times to skip checking.
The items are the stuff in the feed. There are no required elements of an item, except that it can’t be empty. It has to have at least one thing in it. So an item could just have a title or just have a link. However most of the time an item has a title, a link and a description. The description can be a summary or the whole post. Other elements of the item include author, category, comments, publication date and of course enclosure.
So for our Daily Tech News Show example, the title might be Episode 5634 – AI Wins, the description might be “Tom and Sarah talk about how AI just won and took over everything,” and the link would point to the post for that episode.
The enclosure element lets the item point to a file to be loaded. The most common use for the enclosure tag is to include an audio or video file to be delivered as a podcast.
For Daily Tech News Show that would be a link to the MP3 file.
In the end an RSS reader or a podcast player looks at an RSS feed the way your browser looks at a web page. It sees all the titles, links descriptions and possible enclosures, and then loads them up and displays them for you.
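Here’s what that looks like in practice. This Python sketch builds a minimal RSS 2.0 feed with the three required channel elements and one item carrying an enclosure, then parses it back the way a podcast player would (the values mirror the show’s example; the URLs are purely illustrative):

```python
import xml.etree.ElementTree as ET

# Build a minimal RSS 2.0 feed: the channel needs title, description
# and link; the episode item carries an enclosure pointing at the MP3.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Daily Tech News Show"
ET.SubElement(channel, "description").text = "A show about tech news."
ET.SubElement(channel, "link").text = "https://dailytechnewsshow.com"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Episode 5634 - AI Wins"
ET.SubElement(item, "description").text = (
    "Tom and Sarah talk about how AI just won and took over everything."
)
ET.SubElement(item, "enclosure",
              url="https://example.com/dtns5634.mp3",  # illustrative URL
              length="0", type="audio/mpeg")

xml_text = ET.tostring(rss, encoding="unicode")

# A podcast player does the reverse: parse the feed, then pull out
# the channel title and each item's enclosure URL to fetch the audio.
parsed = ET.fromstring(xml_text)
print(parsed.findtext("channel/title"))
print(parsed.find("channel/item/enclosure").get("url"))
```

That round trip is essentially all a feed reader does, repeated for every item in every subscribed feed.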
After a rather stormy opening decade, RSS has settled down into a reliable and with apologies to team RDF, simple way of syndicating info. Really Simple Syndication indeed.
Like podcasting which it provides the underpinnings to, RSS has been declared dead several times. But it just keeps on enduring. I hope you have a little appreciation for that tiny file that delivers you headlines and shows now. In other words, I hope you know a little more about RSS.

About Bluetooth LE Audio


Tom breaks down the Bluetooth LE Audio profile, why it’s needed, and when and where you can expect to see it in your devices.

Featuring Tom Merritt.

Sites mentioned during this episode:
Plugshare
A Better Route Planner
SMR Podcast
BBQ and Tech

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

I already have Bluetooth and I have to say I’m not that impressed.

But now they say there’s Bluetooth LE Audio for my music?

Should I trust this?

Confused?

Don’t be. Let’s help you Know A Little More about Bluetooth LE Audio.

Bluetooth LE Audio is an implementation of Bluetooth LE with a focus on audio quality. So what is Bluetooth LE, you might ask?
Bluetooth LE stands for Bluetooth Low Energy. It’s technically separate from the regular Bluetooth spec but it’s also administered by the Bluetooth Special Interest Group (you can listen to our episode on Bluetooth 5 for more on that). Bluetooth LE uses the same frequency as Classic Bluetooth, 2.4 gigahertz, and it can share a radio antenna, so the two specs are often implemented together. In other words your phone, Bluetooth speaker or headphones might have one or both. But your earbuds are the most likely to have Bluetooth LE in order to save on battery life.
Let’s take a quick dip into where it came from. Nokia researchers adapted Bluetooth in 2001 for low power use and in 2004 published something they called Bluetooth Low End Extension. Continued development with Logitech and other companies, including STMicroelectronics, led to a public release under the name Wibree in October 2006. The Bluetooth SIG agreed in June 2007 to include Wibree in a future Bluetooth spec, but sadly did not agree to keep the name. It was integrated into Bluetooth 4.0 as Bluetooth Low Energy and marketed as Bluetooth Smart. The iPhone 4S was the first smartphone to support it, in October 2011. The Bluetooth Smart name was phased out in 2016. It’s now just called Bluetooth LE, currently grouped under Bluetooth 5. So yes it’s *technically* not the same but it essentially works like a mode of classic Bluetooth. (Pause for shouting of people who say it’s not like that at all, really).
Bluetooth SIG defines profiles for both Bluetooth Classic and Bluetooth LE, basically a definition of how it works for a particular application. One of the LE profiles is the Mesh profile which lets LE devices forward data to other LE devices to create a mesh network. There are a lot of profiles including battery, proximity sensing, internet connectivity, and… tada! Audio.
Since for many folks, Bluetooth means headphones and speakers, the official publication of the Bluetooth LE Audio profile got a lot of attention.
Bluetooth LE Audio’s protocol defines features that expand what low-power devices can do, specifically for audio.
Some of what Bluetooth LE Audio can do already exists. Qualcomm’s aptX Adaptive or Sony’s LDAC codecs offer high quality audio compression and low latency. You just need to pay Qualcomm or Sony to use them. Or you could engineer your own proprietary solution. Which costs you the time to research and develop it.
But you don’t need any of that anymore.
Bluetooth LE Audio will support up to 48 kHz, 32-bit audio at bitrates from 16 to 425 Kbps, with 20-30 millisecond latency versus Bluetooth Classic’s 100-200 millisecond latency, all while going easy on your battery. And it costs a manufacturing company exactly nothing to implement. The magic of industry standards.
Instead of LDAC or aptX, Bluetooth LE Audio uses the LC3 codec. It can deliver higher quality audio at the same bitrate as Bluetooth Classic’s SBC codec and SIG claims it can do better audio than SBC at half the bitrate. That means higher quality audio will use less power.
There’s also a feature called Auracast which lets unlimited audio devices – called “sinks” in the parlance, but we’re talking about speakers, headphones and what have you – connect to a single audio source. For instance everybody in the gym could connect their wireless headphones to a single TV. Everybody in a theater could wear earbuds to get improved movie audio. Users can select Auracasts like they would a WiFi network. Depending on how the OS handles them, they’ll probably show up as a little list of “Auracast Broadcasts” and you would select from the list the one you want to hear. Auracast also supports connections by QR code and NFC, so that may be an option sometimes too. And yes, Auracasts can be password protected if you just want to share with friends, and those can show in the list with a lock.
Here’s another example SIG gives: airport gate announcements. Let’s say you’re at gate C17. There could be an Auracast just for gate C17 and then a password-protected one for the gate agent. That way the gate agent hears the airport announcements meant for them and you hear just the announcements for your gate and don’t get confused by that “board now” announcement from C18 next door.
Now you may be wondering how you’ll see that list of Auracasts on your small earbuds. You’ll need an “assistant,” most often a smartphone, though I suppose it could be a laptop or tablet or some such thing. On the “assistant” device you select the broadcast. The assistant then passes the connection information to the headphones or speaker, which then connect to the broadcast directly. You won’t be dragging down the battery of the assistant device after that.
A few other notable features.
Bluetooth LE Audio also lets each earbud maintain its own connection with the source device. Right now with Bluetooth, only one earbud connects to the source device and then somehow passes along the audio to the other earbud. This is a tricky thing, since the head blocks most wireless connections, so you have to find a way around it. That’s why the first wireless headphones always wired the two earphones together. Apple lets each earbud in its AirPods make a direct connection to an iPhone, but that method is proprietary. With Bluetooth LE Audio, more earbud makers can do it as part of the spec. Word is Apple will adopt Bluetooth LE Audio as well, whether they use it for this feature or not.
This should also make it easier to switch between audio source devices.
Bluetooth LE Audio is also better at managing packet loss when you’re at the edge of the range. Bluetooth LE – without the Audio profile – tries to make sure every packet arrives. And if it can’t, it terminates the connection and then reconnects and starts over. This is a good thing when you want to get every bit in a file. For audio streams though, it means your audio cuts out more when you’re at the edge of the range. Bluetooth LE Audio, since it’s specific to audio, takes a different approach to packets. It limits the time a packet has to be retransmitted so that once it’s too old to matter – you know, it’s the “oooh-we-oo” from the last verse – it gets tossed aside and doesn’t cause the stream to be interrupted. The new LC3 audio codec can compensate for this packet loss so you don’t hear skipping. And it should work the way Qualcomm’s aptX Adaptive or Sony’s LDAC codecs have worked over Bluetooth up until now, by reducing the quality a little until the connection gets stronger. So basically instead of skips and dropouts at the edge of the range you may just get slightly tinnier sound.
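The idea behind that packet-age cutoff can be sketched in a few lines. This is a rough illustration, not the actual controller logic; the function name and the 30 ms cutoff are hypothetical, chosen to match the latency figures mentioned above:

```python
MAX_PACKET_AGE_MS = 30  # hypothetical cutoff, in the spirit of LE Audio's 20-30 ms latency budget

def decide(packet_age_ms, received_ok):
    """Play a good packet, retry a fresh lost one, drop a stale one.

    Plain Bluetooth LE keeps retrying (and resets the link if it can't
    deliver); LE Audio instead discards packets too old to matter and
    lets the LC3 codec conceal the gap.
    """
    if received_ok:
        return "play"
    if packet_age_ms <= MAX_PACKET_AGE_MS:
        return "retransmit"  # still fresh enough to be worth asking for again
    return "conceal"  # too stale: toss it so the stream doesn't stall
```

The key design choice is that third branch: a file transfer would never give up on a packet, but an audio stream would rather skip a stale packet than pause.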
And praises be, Bluetooth LE Audio supports hearing aids and implants. A huge benefit for devices that really do need battery efficiency. Bluetooth SIG expects most new phones and TVs to be hearing loss accessible within the next few years.
So how can you get it?
Some devices may be able to support Bluetooth LE Audio with a software upgrade, so check with your manufacturer to see. In fact Android 13’s beta supports it already. But will your existing device get that upgrade? Maybe, maybe not. Your best bet is to check new devices to see if they support it.
You should be able to find more and more products supporting Bluetooth LE Audio over time. In other words, I hope you know a little more about Bluetooth LE Audio.

About QR codes

Tom explores the history, usage, and possible dangers of QR Codes.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Transcript:

I went to a restaurant and they said their menu was a little box full of boxes.
How am I supposed to read that?
Someone said point my phone at it?
Confused?
Don’t be, let’s help you Know A Little More about QR Codes.

The “QR” in QR code stands for Quick Response code. It was invented by Masahiro Hara of the Denso Wave subsidiary of Japan’s Denso automotive parts company in 1994. He was inspired by the black and white patterns created when playing the game Go. The original application of the QR code was to identify parts in auto manufacturing at high speed.
The QR code is a type of 2D or matrix barcode, as opposed to the widespread UPC bar code you see a lot of, which is considered a 1D bar code. A 1D bar code is read in one dimension. So with UPC, a laser scans horizontally across a series of black and white bars of varying widths. Whereas a 2D barcode is read vertically and horizontally and uses rectangles, dots, hexagons and other patterns.
The big advantage of a 2D bar code is it can hold more information and deliver it quicker than a 1D bar code.
A QR code uses black squares called data modules arranged in a square grid on a white background. The background should extend outside the square in what’s called a “quiet zone” to make it easy to detect what’s actually part of the QR code’s matrix. You can encode four standard types of input data or “encoding modes”: numeric, alphanumeric, byte/binary and kanji.
The maximum amount of information you can encode depends on which of these inputs you’re using as well as your level of error correction and the dimensions of the grid. Grid dimensions are described by a version number from 1 to 40, with version 1 having 21 by 21 data modules and each version adding 4 per side until you get to version 40 with 177 by 177 data modules.
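That sizing rule is simple enough to write down. A quick sketch: since version 1 is 21 modules per side and each step adds 4, the side length collapses to 17 + 4 × version:

```python
def modules_per_side(version):
    """Data modules along one side of a QR code, for versions 1 through 40.

    Version 1 is 21 x 21, and each version adds 4 modules per side,
    which works out to 17 + 4 * version.
    """
    if not 1 <= version <= 40:
        raise ValueError("QR code versions run from 1 to 40")
    return 17 + 4 * version
```

So `modules_per_side(1)` gives 21 and `modules_per_side(40)` gives 177, matching the endpoints above.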
Maximum capacity can be found with the 40-L numeric encoding, which encodes just numbers at the maximum dimensions of the grid with the lowest error correction. It can hold 7,089 characters. The alphanumeric version of the same thing holds 4,296 characters. Most QR codes you see in everyday life are around versions 2-5 and usually hold between 20 to 100 characters, enough for a shortened URL.
Because a QR code is two dimensional you need an image sensor to detect it. Since almost every phone now has a camera, the phone has become the most familiar way QR codes are scanned.
A Reed-Solomon error correction process is used to interpret the pattern. Reed-Solomon is also used in CDs, Blu-ray Discs, DSL and RAID 6. In QR codes there are four levels of error correction: L is the lowest, restoring approximately 7% of data; M is the middle at 15%; Q is the next up at 25%; and H is the highest at 30%. This is going to offend statisticians and data professionals, but you can roughly think of it as: if up to 7% of the data is damaged, the L error correction will still let you read the data. In practice most QR codes seem to use M. I guess they assume if more than 15% of that sticker is damaged you might as well get a new menu sticker.
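Those four levels amount to a small lookup table. Here’s a sketch using the approximate recovery percentages above (the function name is ours, and this is the same rough rule of thumb, not a real Reed-Solomon decoder):

```python
# Approximate fraction of damaged data each error-correction level can recover
ERROR_CORRECTION = {
    "L": 0.07,  # lowest overhead, least protection
    "M": 0.15,  # the common default in the wild
    "Q": 0.25,
    "H": 0.30,  # highest protection, largest code
}

def still_readable(level, damaged_fraction):
    """Rule of thumb: readable if damage stays within the level's capacity."""
    return damaged_fraction <= ERROR_CORRECTION[level]
```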
But let’s get into how that pattern of blocks gets turned into your restaurant menu or wifi password or name of a conference room or whatever. The whole QR code is made up of just those blocks, called data modules, either black squares or empty white spaces.
You might have noticed there are always three distinctive larger squares in the corners of a QR code. Those are position markers. They are used along with a smaller square or set of squares near the fourth corner to calibrate the size, orientation and angle at which the pattern is being viewed.
Now your QR code reader, likely your phone’s camera, knows where the code is and can adjust for how big it looks in your camera. It can even do these adjustments on the fly as your unsteady hand wavers over the restaurant table.
Next it needs to know some things about what kind of encoding and error corrections and such were used. This way it can interpret the data correctly.
The mode indicator is placed in the bottom right, indicating the input type. Other format information, like error correction quality and character count, is placed near the three squares. These are done as a sequence of 4-bit indicators.
That stuff is always the same and lets the reader know whether to look for numbers, alphanumerics, kanji, whatever, and how much of the code will be redundant error correction.
Now it’s time to read the whole point of this exercise. The data. The thing. The link to the menu. The kind of auto part this is. The WiFi Password!
In the space remaining after the position markers and format data, the encoded data is placed from right to left in a zigzag pattern until it reaches an end indicator. The amount of bits used for your data varies by the type of input. So numbers can get 3 digits into 10 bits, alphanumeric gets 2 characters into 11 bits and so on. You can even switch encoding types if you need to. Just throw in another 4-bit indicator.
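Those per-mode bit costs can be turned into a small size estimator. A sketch under stated assumptions: the leftover-group sizes (4 and 7 bits for numeric, 6 bits for alphanumeric) come from the QR spec, and this ignores the mode and character-count indicators that precede the data:

```python
def numeric_bits(n_digits):
    """Bits for QR numeric mode: 10 bits per group of 3 digits,
    then 7 bits for a leftover pair or 4 bits for a leftover digit."""
    groups, leftover = divmod(n_digits, 3)
    return groups * 10 + {0: 0, 1: 4, 2: 7}[leftover]

def alphanumeric_bits(n_chars):
    """Bits for QR alphanumeric mode: 11 bits per pair of characters,
    plus 6 bits for a leftover single character."""
    pairs, leftover = divmod(n_chars, 2)
    return pairs * 11 + (6 if leftover else 0)
```

For example, an 8-digit number costs `numeric_bits(8)` = 27 bits, while the same 8 characters in alphanumeric mode cost 44, which is why pure numbers pack so much tighter.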
You often need to mix input types because alphanumeric can only do capital case and 8 punctuation marks. So to do anything beyond that you need to use bytes which takes up more bits.
And that’s it. Once the reader has interpreted all that, it has the data, and then the reader goes from there, whether that’s showing you a URL you can tap, a WiFi password you can enter, or the name “brake pad.”
You may wonder who keeps track of how that all works so that every reader works with every QR code.
QR codes have been standardized multiple times over the years. The first time was in October 1997, issued by the Association for Automatic Identification and Mobility, followed by one in January 1999 from JIS, or Japanese Industrial Standards. And then the heavyweight, the International Organization for Standardization, or ISO, issued its first standard in June 2000 and most recently updated it on February 1st, 2015.
Denso freely licenses QR code tech as long as users follow either the JIS or ISO standards. While Denso holds patents on the technology, it waived its rights for standardized codes and its patents in the US and Japan have already expired.
Denso does still hold the trademark on the name QR code and maintains some proprietary, non-standard implementations. But the ones you mostly see are standards-compliant.
You probably figured this out but QR codes are static. Once they’re printed, they don’t change. Even if you made an animated GIF of a QR code, the reader would just keep trying to show you the latest one. Once you make a QR code it’s meant to stay that way. Which makes them great for permanent information, which is why they were very good at parts identification. This is a shock absorber and we have very little expectation it will suddenly become a brake pad so we can slap a QR code on it so the assembly robots know what it is.
However at some point folks had the bright idea to encode URLs into QR codes. Why not? URLs are just strings of characters after all. Now, the URLs are still static. But any URL can be made to point to a different thing over time by redirecting it. Knowalittlemore.com for instance points to the ACast site where the podcast lives. I could change that to point to the Daily Tech News Show blog posts about the show instead, if I wanted. So URLs sort of bring in the idea of a dynamic QR code, and so some people refer to static vs. dynamic QR codes. Let’s be clear, they’re all static. So when someone says a QR code is dynamic, it just means it has a URL. The code itself isn’t actually dynamic. But it points to a URL that you know you can redirect to different things. This is helpful for, say, a restaurant that changes its menu.
It is also helpful for malicious types who want to commit crimes and other bad behavior.
As I think is clear by now, QR codes themselves are not risky, as they only hold static data. QR code readers, when working properly, would prevent unauthorized execution of that data, and there’s not a lot of leeway to make a very capable executable anyway. So the bigger worry is the URL. The practice of encoding URLs in QR codes is widespread, dare we speculate it is the norm, and that means the same risks that come with clicking any URL anywhere come with QR codes. One weakness could be a third-party QR code reader that is lax about permissions. But even the most buttoned-down reader, the one from the OS manufacturer built into the camera app, can still take you to a malicious site, like any link in an email, text message or web page.
As such you should only scan QR codes if you’re certain of the source. QR code stickers out in the world might be legitimate or might have been stuck there by someone malicious, possibly over the legitimate code. This doesn’t mean you should never scan a QR code in public but use secure QR code readers and look carefully at the link you’re being sent to before tapping it.
Some malicious links can look to you like they operate normally while engaging in malicious behavior like accessing your browser history or sending text messages without your knowledge.
It’s also good to double-check the URL after you tap to make sure it took you where you expected to go. Don’t just look at the graphics or the site layout; those can be faked. And resist the urge to log in, pay for something or download an app from a QR code link. Those are all popular scam vectors. There are legitimate times to use QR codes for that, but you need to be very sure about the legitimacy of the code before you do any of those.
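A minimal version of that double-check can be done with nothing but the standard library. This is a sketch, not real phishing detection; the expected-host comparison is the part a scam QR code usually fails, and the function name is ours:

```python
from urllib.parse import urlparse

def looks_suspicious(url, expected_host):
    """Flag a decoded QR URL that isn't HTTPS or doesn't belong to
    the domain you expected (e.g. the restaurant's own site)."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return True  # plain http (or weirder schemes) from a sticker: be wary
    host = (parts.hostname or "").lower()
    expected = expected_host.lower()
    # Accept the exact host or any subdomain of it, nothing else.
    # This catches lookalikes such as "example.com.evil.net".
    return not (host == expected or host.endswith("." + expected))
```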
And finally, keep in mind that while the actual scanning of a QR code leaks no data, using a QR code to go to a website exposes all the same kinds of data as any visit to a website: your IP address, the kind of browser and device you’re using, and so on. This is no worse than browsing the web, mind you, but worth remembering.
Lastly, there are a few variations on the QR code you may encounter.
The Micro QR code holds a very small amount of info but doesn’t take up much space, so it’s often used on small items. It only has one positioning square, in the upper left corner.
Denso Wave has a proprietary version called the IQR code that can be square or rectangular. It works well on cylindrical objects and holds more information than the standard QR Code.
And Frame QR codes take advantage of the error correction process to allow for a canvas area that can be used for logos, graphics etc.
QR codes are just big dumb links in the world made of squares. Treat them like any big dumb link you’d find anywhere.
In other words, I hope you know a little more about QR codes.