About DNS (Rewrite)


The Internet’s directory was once a simple text file on a single computer but has evolved into many directories world-wide that enable the Internet as you know it.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

In 1972, four years after the Mother of All Demos, Douglas Engelbart's Augmentation Research Center might have felt like it was falling apart. More and more folks who had worked on Engelbart's NLS were moving on, many of them up the street to Xerox's Palo Alto Research Center.
But that didn't mean Engelbart's Augmentation Research Center was closing. Over the past few years ARC had been working with the Advanced Research Projects Agency on its new network, the ARPANET. It had launched October 29, 1969, and had 29 computers connected, but new hosts weren't coming online as fast as hoped. So Bob Kahn at ARPA wanted to show everyone what this network could do and why it was worth funding.
Kahn was planning a big demo of the ARPANET at the first International Conference on Computer Communications in Washington, DC, and needed someone to organize all the info into a handbook to go along with the demonstration.
Elizabeth Feinler, aka "Jake," was a biochemist. But she was also fascinated with how to compile large amounts of data. She had worked on a project to index all the chemical compounds in the world. And in 1960 she joined the Stanford Research Institute, where she developed the Handbook of Psychopharmacology and The Chemical Process Economics Handbook.
By 1972, she led the Literature Research section at SRI. And she told Webster University Professor Julia Griffey in 2019 that, of course, she knew Douglas Engelbart.
So Engelbart asked her to come over to the ARC team, where she wrote the Resource Handbook for the ARPANET demo.
You don't put together all the info about how the ARPANET works without becoming a useful resource people rely on when they have questions about how the ARPANET works. By 1974, Feinler was one of the people planning and running the Network Information Center.
NIC was the reference desk of the ARPANET. If you needed to know something, you called NIC. Literally. On the phone. You could also send a letter if your request was less urgent. NIC published a book, the successor of that Resource Handbook from the demo, listing all the protocols of the ARPANET and all the registered names and terminals.
That demo had worked, and more computers were connecting to the ARPANET all the time. In 1974, the Network Working Group decided to create a text file to list all the host names, so they didn't have to keep publishing a pamphlet. They had an information network, after all; why not use it?
Feinler took charge of making sure it was updated. She'd keep doing that until 1989, when the domain name system came along and made it easy to find a machine on what was now the Internet just by typing in a name. Without it, Feinler at 92 might still be updating that hosts file.
So let’s help you Know a Little More about the Domain Name System, aka DNS.
DNS stands for Domain Name System. It's essentially the system that lets you type google.com when you want Google search rather than having to remember something like 142.250.68.46.
That string of numbers is an Internet Protocol address, or IP address. That's actually how computers on the internet talk to each other. They identify each other by numbers.
Domain names are associated with those numbers. When you type a domain name into a browser, the browser looks up in a table which number (or more often range of numbers) goes with that domain name so it can find it on the internet. The same way you just go to your friend's name in your phone's contact list to call them. You don't tap in their phone number by hand.
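If you're curious, you can watch that lookup happen with a few lines of Python using only the standard library. This is a minimal sketch; the domain is just an example, and your operating system's resolver (and its caches) does the real work:

import socket

# Ask the system resolver which addresses go with a name.
for family, _, _, _, sockaddr in socket.getaddrinfo("google.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])  # e.g. 142.250.68.46; results vary by region and time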
The Domain Name System provides a worldwide distributed directory of which domain names go with which numbers. It's not just one table (anymore); it's lots of tables on lots of servers around the world. So DNS also defines a communication protocol for how all those directories communicate with each other so that any computer can find another on the Internet.
But it did start in 1974 as that HOSTS.TXT file, developed and maintained by Elizabeth Feinler on a machine at the Stanford Research Institute. She mapped host names to the numbers she found in the Assigned Numbers List handled by Jon Postel at USC. Feinler and her team managed that list for the ARPANET, and later the Internet, until 1989.
But along the way that host table became slow and unwieldy. And on January 1, 1983, the ARPANET and Defense Data Networks switched to the TCP/IP standard and became the Internet. That meant all networks could be connected by a universal language. It also meant a lot of safety nets that existed before were no longer there, and there were a number of issues that needed addressing. Some of them were considered very important and got a lot of attention. Others, not so much.
That's Paul Mockapetris talking to the Oxford Internet Institute about taking on the task of automating the system; he published the original spec for the Domain Name System in November 1983.
Four UC Berkeley students wrote a UNIX implementation of the spec called the Berkeley Internet Name Domain or BIND. BIND is still the most widely used DNS software on the Internet. And yes it has been updated several times since then.
The domain name system itself is made up of multiple domains. The most familiar is of course .com. There's also .org, .net, .fr, .biz and on and on. Each of those domains has an authority responsible for assigning domain names and mapping them to the corresponding numbers. Each domain has multiple name servers that you can call on to find which IP address goes with which domain name.
But it’s not just one server with all the addresses. In fact the process involves different servers for different parts of the domain name.
You see, the domain name itself consists of multiple labels. Let's take knowalittlemore.com. The right-most label is the top-level domain, .com. Each label to the left specifies a subdivision. So the first to the left is knowalittlemore, which is the domain of this show. For websites, the left-most label is usually www, specifying that you mean the web server on that domain. So when you type in http://www.knowalittlemore.com you go to the website for knowalittlemore.com, not the email server. If you're thinking you don't ever type in www, well, browsers can add it for you, and you can configure your name server to assume www was meant if nothing else (like, say, SMTP for email) is to the left.
Each label in your domain name can have up to 63 characters. A full domain name with all subdivisions can’t be longer than 253 characters in text or 255 octets of storage in binary.
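As a rough sketch, those two limits are easy to check in a few lines of Python; the function name here is just illustrative:

def has_valid_lengths(domain: str) -> bool:
    # Drop a trailing dot, which marks a fully qualified name.
    name = domain.rstrip(".")
    if len(name) > 253:  # the whole name, in text form
        return False
    # Every label between the dots must be 1 to 63 characters.
    return all(0 < len(label) <= 63 for label in name.split("."))

print(has_valid_lengths("www.knowalittlemore.com"))  # True
print(has_valid_lengths("a" * 64 + ".com"))          # False: label too long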
The characters in a domain name are officially A-Z, 0 through 9 and the hyphen. However, the Internationalized Domain Names in Applications, or IDNA, system can map international characters into this set so people can use their own alphabets.
Each domain, like .com, .uk, etc., has a set of authoritative name servers that are either primary or secondary. A primary server has the original, up-to-date copy of all domain records. Secondary servers communicate with the primary to automatically update.
In practice, information is cached to speed things up, and you're almost always calling on cached information when you browse. But let's pretend there was no cache available and you want to go to knowalittlemore.com. The request would start by finding the closest root name server. These are spread throughout the world. The root name server would direct you to the nearest .com name server. That server would then tell you which name servers are authoritative for knowalittlemore.com. You'd check there to find out which IP address goes with the web server at knowalittlemore.com, and potentially, with more complicated requests, onward until you get the exact server you're looking for.
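Here's a rough Python sketch of that walk, using the third-party dnspython package (pip install dnspython). It starts at a.root-servers.net and follows each referral; as a simplification, it cheats by using the system resolver to turn each referred name server's name into an address, which a real resolver would do itself:

import socket

import dns.message
import dns.query
import dns.rdatatype

def iterative_resolve(name: str, server: str = "198.41.0.4") -> None:
    # 198.41.0.4 is a.root-servers.net, one of the 13 root server addresses.
    while True:
        query = dns.message.make_query(name, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=5)
        if response.answer:
            # An authoritative server finally answered with the A record(s).
            for rrset in response.answer:
                print(rrset)
            return
        # No answer yet, so this is a referral to the next zone's name servers.
        ns_name = response.authority[0][0].target.to_text()
        print("referred to", ns_name)
        server = socket.gethostbyname(ns_name)

iterative_resolve("knowalittlemore.com")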
With all these intermediaries, it's possible for malicious actors to insert themselves and give you the wrong IP address for a domain. That address could take you to a malicious version of the site, one that looks just like the real thing but infects you with malware or something.
Domain Name System Security Extensions, or DNSSEC, requires each level of DNS server to digitally sign its records, so responses can be verified as authentic and untampered with. It is deployed at the root level but has not been fully deployed across the system because of complexity and also reasons.
As I said, in practice so much of the process is cached that root name servers get a very small fraction of requests; otherwise they'd get overloaded. Records may be cached in your browser, in your router, by your ISP and so on. Cached records have a time to live, or TTL, set on them, so caches are forced to go look for changes regularly and stay pretty well up to date.
Name servers record more than just the domain name and corresponding IP address. They also include mail exchanges, known as MX records, and domain name aliases, known as CNAME records, as well as responsible persons. There's even a real-time blackhole list, or RBL, for combating spam.
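As a quick illustration, here's how you might look up a couple of those record types with the third-party dnspython package (pip install dnspython); the domain is just an example:

import dns.resolver

# MX records: where email for the domain should be delivered.
answer = dns.resolver.resolve("gmail.com", "MX")
for record in answer:
    print(record.preference, record.exchange)

# Each answer also carries the time to live mentioned above: how many
# seconds a cache may keep these records before asking again.
print("TTL:", answer.rrset.ttl)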
And it can do more than just tell you what domain name goes with what address. The DNS can provide the IP address that is closest to the requesting computer. This function is essential to cloud services and content delivery networks. Netflix doesn’t have one machine at Netflix.com. It has thousands and the Domain Name System is the first step in routing your Netflix app to the closest set of Netflix servers so you have the least delay in getting that episode of Stranger Things.
OK so I know a lot of you have questions about registering domains and how that fits in, let’s touch on that briefly.
Registrars
To register a domain name and get its record created in the DNS directory you need to deal with an official domain name registrar. The registrar is different from the registry. Each domain like .com or .us has a registry. The registrar is contracted to handle requests for domain names and collect and verify the information that is then entered into the directory by the registry. Registrars can and do charge fees for this.
And yes registry and registrar are different and really should have been named something that made that a little more obvious.
Let's use an example: for .com, authorized registrars, like say hover.com, must pay the registry, which in the case of .com is Verisign. The registrar also pays a small administration fee to ICANN for each domain it handles. The price the public pays the registrar is these fees plus some markup. The maximum registration period is 10 years, though some registrars offer longer periods by legally binding themselves to renew the domain at the end of each ten-year period.
There is usually more than one registrar per domain, and in fact registrars usually handle more than one domain. Registrars can also authorize resellers as affiliates.
So there you have it. You pay a registrar to register a domain name with a registry, and then when someone looks up your domain name, the Domain Name System directory, or more likely a cached copy of it, will point their browser to the IP address of your web server.
And Jake Feinler doesn’t have to be involved anymore.
Oh, and one more thing. If you're wondering why Elizabeth Feinler went by Jake, Feinler told the Computer History Museum in 2001 that when she was born in 1931, double names were a fad. Her middle name was Jocelyn, so they called her Betty Jo. Except her sister, Mary Lou, pronounced it Baby Jake. Eventually she grew out of the Baby, but the Jake stuck.
And we thank Jake Feinler for her hard work keeping that Hosts file going in the early days.
In other words, I hope you know a little more about DNS.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.

About Word Processors


The word processor never got "invented" as such; it developed gradually out of things like typewriters and only slowly migrated from electronic appliances into software. To understand the origin of the word processor you and I would recognize, the one with bold, italic, underline, we need to follow the journey of future space tourist Charles Simonyi.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

Charles Simonyi wasn’t the first space tourist. He wasn’t even the first Hungarian in space. But he’s one of the few humans ever to go into space twice. First in April 2007 aboard the Soyuz TMA-10. And a second time in March 2009 aboard the Soyuz TMA-14. Both times as a tourist. Imagine having the time and money to not only pay to go to space once, but twice.
Now imagine as you’re looking down from the International Space Station at that big blue marble below, you glance over at one of the working astronauts plugging away at a log on their laptop. You notice the word processor they’re using and say, “I made that happen.”
That’s probably not what Simonyi said. But he could have. Because Charles Simonyi is not only a two-time space tourist. He also created the modern word processor. Twice. And the second one stuck.
Wait wait wait. Some of you have listened to our episode on the Mother of All Demos. So you know that in 1968, during that demo, Douglas Engelbart showed off a Word Processor.
How is that not the first word processor?
Well. First of all, the word processor never got "invented" as such; it developed gradually out of things like typewriters and only slowly migrated from electronic appliances into software.
So Engelbart's word processor wasn't the first; it was just the first software demo. It was pretty barebones too. And it never became a product. Remember, the Mother of All Demos ended up being just a demo. To know who made the modern word processor, to understand the origin of the one you and I would recognize, the one with bold, italic, underline, we need to follow the journey of future space tourist Charles Simonyi.
Simonyi left Hungary in 1965 at age 17 on a short-term visa. He was in wild violation of its terms because he never went back. He started working in Denmark in 1966 on minicomputers and then made it to the US in 1968, to the University of California at Berkeley, where he studied under computer scientist Butler Lampson. Simonyi got his BS in Engineering Mathematics and Statistics from Cal in 1972. By then, professor Lampson had taken a job at Xerox's Palo Alto Research Center, aka Xerox PARC.
Lampson brought Simonyi to Xerox PARC, where they were working on developing the Xerox Alto. As we've talked about in other episodes, the Alto was the embodiment of the ideas from Douglas Engelbart's Mother of All Demos, and the inspiration for the Mac and Windows PC.
So how did we get from Engelbart’s halting, if impressive, basic word processing to the modern day dominance of Microsoft Word? And why is Charles Simonyi the link between them?
Let’s help you Know a Little More about Word Processors.

Almost every one of you listening to this used a word processor today. Most likely it was Microsoft Word. Or it might have been Google Docs or Notepad or TextPad or one of literally thousands of other pieces of software that let you compose text. Word processors are part of the background of computing these days. We don't even think of them anymore.
But many of you recall that knowing how to use a word processor was once a rare enough skill that you’d highlight it on a resume.
And some of you remember when a word processor was a machine itself, not a piece of software.
Before that it was just typewriters.
We could do a whole episode on typewriters, and maybe someday we will. But let's fast forward through the 1714 patent for Henry Mill's writing machine, William Austin Burt's Typographer and Christopher Latham Sholes' typewriter, or as Scientific American called it at the time, the "literary piano."
Those machines just put letters on paper. So what would you call the first word processor? Something that could erase characters without you having to pull out the whiteout? Typewriters started doing that in the 1930s.
A German IBM typewriter salesperson named Ulrich Steinhilper is often credited with popularizing the term Textverarbeitung in the 1950s, translated as "word processing."
But the phrase didn’t really catch on right away.
Perhaps what makes a word processor is the ability to store characters before they're committed to paper. IBM's MT/ST, for Magnetic Tape/Selectric Typewriter, did that in 1964. You could store your typing for later re-use.
Or maybe it was avoiding the need to use paper at all. Something Douglas Engelbart showed off in his Mother of All Demos.
But let’s keep following IBM. Because IBM let you store your words on something you could share and edit.
In 1969, IBM moved from magnetic tape to magnetic cards, and in 1971 introduced the floppy disk. That seems to be a pivotal moment.
That same year, 1971, the New York Times identified word processing as a business buzzword. It was a more specific version of data processing, which was another business buzzword for using computers to run your business.
That floppy disk encouraged other companies to get into the game.
Vydec took advantage of the floppy disk with its Vydec Word Processing System in 1973. For $12,000, much less than a mainframe, you could write, edit and, when ready, print documents, and even share the floppies with others.
Linolex Systems, founded in 1970, also adopted the floppy to make standalone word processing systems. In 1975, one year before the introduction of the Apple computer, Linolex sold 3 million units.
Most of these systems had limited screens. In 1978, Lexitron was the first to include a full-size CRT monitor and the new, smaller 5¼-inch disks.
But Wang Laboratories became synonymous with word processors for a time, with text displayed on a CRT and almost all the modern functions of word processor software today.
None of those are what we're looking for. They're all electronic machines doing what we can do with an app today. We're looking for something that shows on screen what you're going to print on the paper, running on a system that does other things besides word processing.
We’re looking for Bravo.
Bravo was the first What You See is What You Get editing system on a multipurpose computer, the Xerox Alto.
And it was created at Xerox PARC in 1974 by Butler Lampson, future space tourist Charles Simonyi and colleagues. It used the mouse to mark locations in the text, and then the user would type in commands to affect that text. Bravo was what you would call a modal editor, meaning it had more than one mode. In insert mode, you entered text, then pressed escape and it was inserted into the selected area of your document. In command mode, you told the program what to do. You had to click twice to select text: once at the beginning of the text and once at the end. You couldn't drag the cursor while holding the mouse button down like you do now. So you'd mark a sentence with the mouse, then enter the command to make it bold, for instance.
Side note. One quirk of this was that in early versions of Bravo, typing the word EDIT in command mode was interpreted as a series of one-letter commands: select Everything, Delete it, enter Insert mode, type the letter t. Thus replacing all your text with the letter t. You could only undo the most recent command, so once you did this you could undo the insertion of t, but not the deletion of all your selected text.
Anyway they eventually fixed that.
And Bravo wasn't fully WYSIWYG. It was WYSIWYG insofar as the format looked the same on the screen as on the paper: things like justification, fonts, spacing. It did not look exactly like the page, since the Alto monitors displayed 72 pixels per inch and Xerox PARC's laser printers gave you 300 PPI. A special display mode would attempt to show the text as it would appear on the page, though with occasional variances.
BravoX followed in 1979 and was modeless so you didn’t have to switch between command and insert mode.
And Gypsy, another Bravo descendant, was a truly modeless word processor. That meant when you typed a character, it always typed that character in the document. There was no command mode. No accidentally replacing all your text with Ts. You could also hold down the mouse button and drag the cursor across text to select it. No need to click at the beginning and the end anymore. You could also double-click on a word to select it. Once you had your text selected, you could then press the CTRL key and B to bold it, I to italicize it or U to underline it. Gypsy also introduced the ability to cut, copy and paste text. Other commands were available in a clickable menu, as they are today.
While the Alto never reached mainstream commercial success, word processing began to take off.
In December 1976, filmmaker Michael Shrayer began selling Electric Pencil for the Altair, considered the first word processor for personal computers. It displayed 14 lines of 64 characters on a monochrome monitor the size of a small black-and-white TV. Starting in 1977, science fiction author Jerry Pournelle used Electric Pencil to edit his novels. He is generally credited as the first SciFi author to use a word processor on his published works.
In 1978, the founder of MicroPro International, Seymour Rubinstein, took four months and wrote his own word processor, called WordStar, for owners of computers running CP/M. WordStar was rewritten for DOS and competed with WordPerfect for the word processor market.
But the two companies that would make Word Processing mainstream were Microsoft and Apple. And they would need each other to do it.
Apple's first attempt at a modeless WYSIWYG word processor was LisaWrite. You "tore off" stationery to start a document and then edited from there. The Lisa did not succeed, so many more people are familiar with MacWrite. It shipped on every Mac from 1984 to 1986. Then Apple spun MacWrite off into a new company called Claris in 1987, which continued to publish versions of MacWrite until it was discontinued in 1998.
But you don’t use MacWrite today. You probably use Microsoft Word.
A product of computer scientist and future space tourist Charles Simonyi.
In 1981, Simonyi was still working on research at Xerox PARC with luminaries like Robert Metcalfe, who is credited with inventing Ethernet. After Bravo, Simonyi had moved on to other projects. He still gave input and tech support to Bravo, but he was devoting more time to his idea of meta-programming, which he wrote his doctoral dissertation on for Stanford in 1977.
So Metcalfe suggested Simonyi visit Bill Gates at Microsoft. Gates had seen the Alto and was busy trying to incorporate its ideas into Microsoft's products, like Windows. Gates wanted to start an applications group, and he asked Simonyi to lead it and make his first project a WYSIWYG word processor. Not one that would languish at Xerox PARC, but one that everyone could use.
Simonyi took the job and brought with him a Xerox PARC intern named Richard Brodie as the primary software engineer.
Simonyi made some interesting choices with Word (and Excel, which his group also developed). They ran on a sort of virtual machine, which made them easy to port between platforms. And Word supported high-resolution displays and laser printers even though most users didn't have either.
It was first released as Microsoft Multi-Tool Word on October 25, 1983 for Xenix systems, a Unix variant Microsoft licensed from AT&T, followed by a version for MS-DOS. Free copies of Word were bundled into PC World magazine in its November 1983 issue, making it the first application distributed in a magazine on disk rather than as printed code.
Microsoft demonstrated Word for Windows that year, but its main market was still in DOS. Unlike most DOS programs, though, Word was designed to be used with a mouse. It had an undo function and the ability to display bold, italicized and underlined text on screen instead of just with markup. But its interface was too different from WordStar, the one everybody knew.
So Simonyi’s team kept improving it.
By 1985 Word had unlimited undos and the fastest cut and paste in the business. It also had that largely unused support for high-resolution displays and laser printers, and it was easy to port. So it was a no-brainer to bring Word to the brand new Macintosh once it had been out long enough to prove it wasn't another Lisa. Word for MacOS had all the capabilities of a solid DOS word processor with all the true WYSIWYG of MacWrite. It was the best of both worlds. And it filled a gap for the Mac user that Apple had barely been able to fill. For at least four years Word for MacOS outsold Word for DOS.
That let Apple stop spending resources on MacWrite and it gave Microsoft a revenue stream outside DOS.
The first version of Word for Windows was released in 1989. And by 1990 it was the market leader for word processors.
Word processors after the 1990s have stayed mostly the same, with attempts to make what they do easier. Most people type, delete, revise, copy and paste. The biggest innovation in word processing since then has not been in the manipulation of text so much as in the number of people who can work on one document. We'll get to that in a separate episode on collaborative editing.
And Charles Simonyi? He stayed at Microsoft, introducing things like object-oriented programming, which he had learned at Xerox, and the Hungarian notation conventions for variables, which came out of his doctoral thesis.
He finally left Microsoft in 2002 to form Intentional Software, marketing intentional programming concepts, which include a WYSIWYG component to programming.
While there he booked a couple tickets to go to space.
Microsoft bought Intentional Software on April 18, 2017, and Simonyi became a Microsoft Technical Fellow, which he still is today.
So there you go. From typewriters to Engelbart’s Demo, to Simonyi’s work on Bravo and Microsoft Word.
Without them, I wouldn't be able to write the words I'm saying to you right now.
In other words, I hope you know a little more about Word Processors.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.

About the Computer Mouse


You all know what a mouse is. It’s so common, that you probably don’t even think that much about why it’s called a mouse. But back in 1968, the man generally credited with the invention of the mouse, Douglas Engelbart, had to apologize for what was certainly a silly name.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

You all know what a mouse is. It’s so common, that you probably don’t even think that much about why it’s called a mouse. But back in 1968, the man generally credited with the invention of the mouse, Douglas Engelbart, had to apologize for what was certainly a silly name.
So how did we get here? Why do most people associate Steve Jobs and not Douglas Engelbart with the mouse? And why did this form factor prevail over others?
Let’s help you know a little more about the computer mouse.

The closest relative to the mouse is probably the trackball. Heck, some of you probably prefer a trackball to a mouse. Maybe you’re using one right now.
The first trackball was developed in 1946 as an improvement for fire-control radar plotting systems. Military stuff. The Comprehensive Display System, or CDS, calculated the future position of target aircraft based on inputs provided by a user with a joystick. British Royal Navy scientist Ralph Benjamin thought the joystick was a bit clunky. Great for flying an airplane, maybe, but not so precise when plotting coordinates. So he whipped up a prototype of a metal ball rolling on two rubber-coated wheels. The Royal Navy patented the device and classified it as a military secret. But the prototype was the only one ever made.
In 1952, British electrical engineer Kenyon Taylor created a similar input device for the Royal Canadian Navy's Digital Automated Tracking and Resolving system, or DATAR. It used a Canadian five-pin bowling ball and four discs to pick up motion and send X and Y coordinates to a digital computer. Not exactly portable, but it worked. That one wasn't patented, but it was classified as a military secret.
Both shared the idea of trying to make it easier to deliver x and y coordinates, but they were both fixed in place. It would take another decade for somebody to develop a movable pointing device.
Those of you who listened to our episode about the Mother of All Demos already know that Douglas Engelbart was inspired by Vannevar Bush's essay "As We May Think" about the Memex, a machine that could process human information. And that Engelbart set about making Bush's ideas real while at the Stanford Research Institute, where he established the Augmentation Research Center.
Engelbart and his team developed touch screens, video conference, hypertext, and… the mouse.
He was inspired by the planimeter, a tool first developed in the 19th century to measure an area by tracing its perimeter. It's basically a big L-shaped thing with two arms, one that stays put and one that moves around so you can trace the perimeter of something, while a wheel or some other mechanism counts out the measurements.
Engelbart wondered if he could adapt some of the principles of the planimeter to input X and Y coordinates to a computer.
On November 14, 1963, Engelbart jotted down some notes about something he called the "bug." It would have a "drop point and two orthogonal wheels." So basically a small little planimeter on wheels. It would be an improvement on a stylus because you could let go and it would stay at the point you left it.
But it wasn’t Engelbart alone that made it a real thing.
In 1964, fellow Navy alum Bill English joined Engelbart's lab. He helped turn Engelbart's notes into a working prototype.
Those two wheels he mentioned were perpendicular to each other, one for the X axis and one for the Y. Each wheel was connected to a potentiometer, a fancy name for a variable resistor that changes its voltage output. The variance was tied to the rolling of the wheel, which could be measured to estimate where the device was and translate that into a coordinate system on the display.
It was a boxy thing, but it did have the “tail”, the wire that connected it to the computer, originally coming out of the front, oddly. But it was that cord that led people to think it looked like a mouse. And for some reason the cursor on screen was referred to as CAT.
So it was too perfect. Not bug. Mouse.
But the mouse wasn't the only input device Engelbart, English and the ARC team were working on. In fact it wasn't even their favorite. There was a joystick, a knee control and a touch screen called the Grafacon. But the darling of the team was the light pen. You pointed it at the screen! It was so easy to pick up and use. So intuitive. Almost everyone on the ARC team thought the light pen was the one most people would prefer.
As it happened, NASA's Bob Taylor was working on flight control systems and was looking for new ways to make computers more useful. A light pen might be perfect. So he got funding authorized for Engelbart's team to prove the light pen was the best.
Bill English and Bonnie Huddart led the study. The team developed a series of tasks and timed volunteers doing various things, like moving a cursor on a screen to a random position.
And the light pen did well.
In fact, inexperienced computer users did best with the light pen and the knee control, since they were easy to understand just by using them. But for experienced users? The mouse outperformed all the other options. By a good amount.
And that test led to the first occurrence of the word “mouse” in print to refer to the input device.
English, Engelbart and Huddart co-authored a report on the experiments for NASA called “Computer-aided display control” that mentioned the mouse 26 times. The first reference (not in the table of contents) was on page 14. Here’s what it said. “Within comfortable reach of the user’s right hand is a device called the “mouse,” which we developed for evaluation (along with others, such as light pen, Grafacon, joystick, etc. ) as a means for selecting those displayed text entities upon which the commands are to operate.”
So the mouse was in use and it had been proven to be a superior way of controlling a computer. All that remained was to let the world know. But Engelbart wouldn’t show off his work to the general public until 1968. Which means he got scooped by a few months.
Unbeknownst to Engelbart, in 1966 engineers at Germany's AEG-Telefunken began work on an input device that could send x and y coordinates to a display. It was shaped like half a sphere and used a 40-millimeter-diameter ball with two mechanical transducers to detect position. That's right, the familiar ball mouse wasn't made by Engelbart. You can credit that to Rainer Mallebrein.
As we mentioned earlier, one of the problems with the early trackballs was that they were big and had to be fixed in place. Telefunken made one for Germany's Federal Air Traffic Control. Mallebrein had the idea of reversing that trackball. Just turn it upside down. Then you could roll it around and make it movable, so you didn't have to worry about where to mount it.
On October 2, 1968, AEG-Telefunken published a brochure showing off an optional input device for the SIG 100 vector graphics terminal, called the Rollkugelsteuerung (German for "rolling ball control"). They were a little expensive. But they ended up at about twenty universities and the Leibniz Supercomputing Centre in Munich.
Even so, Engelbart still got to make a splash with his mouse in front of a crowd. And his team gets bragging rights for the name. This episode is not, after all, called About the Rollkugelsteuerung.
On December 9, 1968, Engelbart showed off his mouse during what would be known later as the Mother of All Demos.
Bill English was instrumental in helping Engelbart with the demo. He is credited with figuring out how to connect the SRI lab at Stanford with the Civic Auditorium up in San Francisco.
But again if you listened to the Mother of All Demos episode, you know. While it was impressive, it didn’t directly lead to anything.
So slowly after that high moment, members of the lab began to head off in pursuit of their own interests.
In 1971, Bill English left SRI. He didn’t go far. If he’d walked, it might have taken him about an hour. If you leave Stanford, and head down El Camino Real, take a right on Stanford Avenue, and left on Foothill Expressway, you’ll find yourself at the Palo Alto Research Center. These days it’s just called PARC and is in fact part of SRI International. So you would just be going from one part of the organization to the other.
But in 1971 it was better funded and it was called Xerox PARC.
English managed its Office Systems Research Group. And he borrowed that ball idea from Telefunken to create a mouse for Xerox.
It was part of the legendary Xerox Alto, released in 1973. The Alto was the first desktop computer to use a graphical user interface and a mouse.
Following the Alto, ETH Zurich shipped its Lilith computer with a version of a mouse as well.
But the one most computer folks of the time will remember was the mouse that came with the Xerox 8010 Star personal computer in 1981. If you had the $16,500 to buy one anyway. It became the best known computer with a mouse to that time, but it was still an obscure device. Jack Hawley manufactured the mouse for Xerox and pretty much had the market to himself.
Competition arrived slowly. Logitech, probably the top mouse brand in 2023, showed off its first mouse, the P4, at Comdex in 1982. Microsoft made Microsoft Word mouse-compatible that same year and shipped its own mouse a year later in 1983, the first product from Microsoft Hardware.
Apple's ill-fated Lisa, its first attempt to replicate and modernize the Xerox Alto, came out in 1983 with a mouse. But it was a legendary flop.
It was January 30, 1984 that changed the course of the mouse.
Apple's Macintosh did a lot of things right, including the mouse. Remember that study back in 1965 that found the mouse was best for experienced users?
The Mac wasn't meant for experienced users. So Apple built in a guide to get you up to speed. It was so important that Rony Sebak, the person who wrote the guide, was up on stage with the RAM and circuit board engineers during the announcement!
That made the mouse mainstream.
Over time the mouse lost much of what made it mouse-like. Both Steven Kirsch and Richard F. Lyon independently created an optical mouse in 1980. First with LEDs and later with lasers, the optical mouse replaced the need for the ball to detect position. And eliminated the need to pull the ball out and clean it. Sometimes by inadvisedly popping it in your mouth.
The mouse lost its tail too. The first wireless mouse came along in September 1984. Logitech made the infrared mouse for the Metaphor computer, made by former Xerox PARC employees David Liddle and Donald Massaro. Infrared needed line of sight, though. But RF and later Bluetooth would make the wireless mouse mainstream.
And these days there’s a seemingly infinite variety. Pucks, force feedback, tower-like ergonomic forms and more.
Even the number of buttons has changed. Engelbart's original prototype had one button. The one he demonstrated in 1968 had three. The Alto's also had three. Microsoft's had two. Apple went back to one. Today a mouse usually has one or two main buttons, but also scroll wheels, side buttons and more customizations.
But at base they all work the same way: detect movement in two dimensions, then translate that into data a display can use to replicate that movement with a graphic on a screen.
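A toy Python sketch of that idea, with made-up screen dimensions and movement deltas (real drivers add acceleration curves, DPI scaling and more):

SCREEN_W, SCREEN_H = 1920, 1080

def move_cursor(x: int, y: int, dx: int, dy: int) -> tuple:
    # Clamp so the cursor never leaves the visible display.
    return (min(max(x + dx, 0), SCREEN_W - 1),
            min(max(y + dy, 0), SCREEN_H - 1))

pos = (960, 540)
for dx, dy in [(5, -3), (12, 0), (-40, 25)]:  # deltas reported by the mouse
    pos = move_cursor(pos[0], pos[1], dx, dy)
print(pos)  # (937, 562)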
Engelbart made $10,000 off the invention of the mouse, a lump sum paid by SRI for the patent. English didn’t get rich off the mouse and retire either. After leaving Xerox in 1989, he worked for Sun Microsystems for years.
And what about Bonnie Huddart? The other name on that report that first mentioned the mouse. She left SRI shortly after its publication and became the first director of the computer center at Reed College in Portland, Oregon.
The only one that made much money off the mouse was SRI which held the patent. It was granted in 1970 and expired in 1987. But even SRI didn’t make much. Xerox, Microsoft and Apple all licensed it from SRI. The general belief is that Apple paid around $40,000 for its license, though there’s no definitive record. Suffice to say it wasn’t a lot of money considering how transformative the mouse would become.
But that’s not why Engelbart did any of this. He wasn’t trying to make a bundle of money, heck he wasn’t even trying to invent the mouse.
He was trying to make the world better.
In 2008 he spoke to a crowd at Google as part of a panel called "Inventing the Computer Mouse." He talked about how he got started developing technology like the mouse.
I’d argue that the mouse has played a pretty pivotal part in making the world better.
And I hope now you know a little more about the Computer Mouse.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.

YouTube Traffic Signals Problems – DTNS 4595

On October 1st, Microsoft will separate Teams from the Microsoft 365 and Office 365 suites in the European Economic Area and Switzerland to head off antitrust regulators. Plus, how is a YouTube invalid-traffic bug causing problems for YouTubers? Tasia Custode from "AI Named This Show" explains. And the Philips Hue line gains smart cameras and sensors.

Starring Sarah Lane, Robb Dunewood, Tasia Custode, Roger Chang, Joe.

MP3 Download


Using a Screen Reader? Click here

Download the (VIDEO VERSION) here.

Follow us on Twitter, Instagram, YouTube and Twitch

Please SUBSCRIBE HERE.

Subscribe through Apple Podcasts.

A special thanks to all our supporters–without you, none of this would be possible.

If you enjoy what you see, you can support the show on Patreon. Thank you!

Become a Patron!

Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods Jack_Shid and KAPT_Kipper on the subreddit

Send us email to [email protected]

Show Notes
To read the show notes in a separate page click here!


About Section 230 (May 2023 Update)


We update the history of Section 230 in light of the recent Supreme Court decisions. What it is, what it isn’t and how those decisions affected or didn’t affect the future of the “safe harbor” law in the US.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

The US Supreme Court has decided two cases that challenged the protections of Section 230 of the US Communications Decency Act, and in both cases the court decided not to touch those protections. In oral arguments for the cases, the court indicated it felt maybe Congress should be the one to do that.
In Twitter v. Taamneh, the plaintiffs argued that Twitter provided unlawful material support to terrorists by failing to remove users from its platform. Gonzalez v. Google claimed that a platform, in this case YouTube, should be liable for content it recommended to users.
A lot of people misunderstand what Section 230 does and doesn’t do. So in this updated episode, I’ll cover the basics of what it is and what it isn’t and what the court did and did not say in these landmark cases.

We covered the history and meaning of Section 230 in depth in the episode About Safe Harbor in July 2020. So if you want the deep dive please listen to that.
This episode will focus on how to properly explain and think about Section 230 no matter what argument you may or may not be trying to make. You may think Section 230 promotes censorship. You may think it protects big tech companies from responsibility. You may think it should be repealed. Those are all reasonable positions to take. But I often hear people argue these sorts of positions from a starting point that is wrong. I just want to give you the correct starting point from which you can make your argument.
So let’s start with the folks who say we should just get rid of it. There is a misconception that if we get rid of Section 230 companies would have to take responsibility for the content on their platform or that they would have to stop censoring. Neither one of those things is assured.
Without Section 230, ANY platform (and it's worth pointing out this applies to a forum you might run on your own website, as well as to Facebook) would be seen in the eyes of the law as either a publisher of information or a distributor. A publisher is responsible for what it publishes. A distributor is not responsible for the contents of what it distributes.
The easiest way to think about this is a brick-and-mortar bookstore. The publishers of the books and magazines it sells are responsible for what's in the books and magazines. The bookstore is just the distributor. In fact, a 1959 Supreme Court case ruled that a bookstore owner cannot reasonably be expected to know the content of every book it sells. The owner should only be liable if they know, or should have known, that selling something was specifically illegal. Otherwise the publisher is liable for what's in the book or magazine.
Now let’s think about that for a minute. The bookstore can decide what magazines to carry. But it’s not deciding what’s in the magazine. It isn’t allowed to sell magazines that it knows are illegal but it’s not expected to read every word of every magazine to police its content.
On the other hand, letters to the editor published in the magazines are in fact the responsibility of the publisher. Just because a reader wrote the letter doesn’t mean the publisher had to print it. It CHOSE to print it. It exercised editorial control, and therefore is liable for what the reader wrote.
The publisher of the content is not protected from liability. But the bookstore gets protection because it’s not exercising editorial control of what’s in the books. It’s a distributor.
Fast forward to the 1990s. CompuServe and Prodigy are vibrant new parts of the online world where people are talking to each other like never before.
It’s April 1990. Sinead O’Connor’s new song Nothing Compares 2 U (written by Prince) tops the Billboard charts.
Robert Blanchard has developed Skuttlebut, a database of TV news and radio gossip. It's a new competitor for a similar service called Rumorville, published over on CompuServe's Journalism Forum. Skuttlebut and Rumorville are in stiff competition for the burgeoning online audience that wants TV and radio news industry gossip. This is FIVE YEARS before the Drudge Report, mind you.
In the heat of the competition, Rumorville posts that Skuttlebut has been getting info through a back door at Rumorville, that Skuttlebut's owner, Robert Blanchard, got "bounced" by WABC, and that Skuttlebut is a "scam."
So Skuttlebut's owner, Cubby Inc., sued Rumorville's parent company, but also sued CompuServe as the publisher. But here's the thing. CompuServe did not review Rumorville's content. Once it was uploaded, it was available. CompuServe also didn't get any money from Rumorville. The only money it made was off the subscribers to CompuServe itself, whether they read Rumorville or not.
In Cubby, Inc. v. CompuServe, the judge ruled that CompuServe was not a publisher. It was a distributor. It could not reasonably know what was in the thousands of publications it carried on its service. Therefore, like a bookstore, CompuServe was not liable for what was published in Rumorville.
Reminder. This is without Section 230. The platform was not exercising control over the content so it was not liable for what was in it.
On to October 1994. Boyz II Men's "I'll Make Love to You" is in the middle of a long run at number one on the charts.
Prodigy's Money Talk message board is still awash in talk about the bond market crisis. And an anonymous user posts that securities investment firm Stratton Oakmont committed crime and fraud related to a stock IPO. Stratton Oakmont takes exception to what it considers defamation and files a lawsuit against Prodigy, alleging the company is the publisher of the information.
So you'd think, given the CompuServe case, that Prodigy is in good shape. It didn't publish the comments; the commenter did.
Except. It's been a few years and a few raging internet flame wars, and Prodigy, like many other platforms, has developed some Content Guidelines for users to follow. It also has Board Leaders who are charged with enforcing those guidelines. And Prodigy even uses some automated software to screen for offensive language. This is all good community moderation practice, right? A clear set of guidelines. Consequences if you violate them. And even some automated ways to keep some of the bad stuff from ever showing up.
The court looked at that and said, well, looks to us like you’re exercising editorial control. You’re deciding who gets to post what. That feels a lot more like the letters to the editor than it does the bookstore. The court wrote “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.”
In Stratton Oakmont v. Prodigy, the court ruled in favor of Stratton Oakmont.
After that case the law stands that courts will give you the protection of a distributor, as long as you don’t moderate. If you moderate the content, you’re on the hook for it.
So, in other words, before Section 230 you could either leave everything up or be responsible for everything, meaning you'd have to pre-screen all posts. Your choice was either zero moderation or prior restraint.
Republican Chris Cox and Democrat Ron Wyden both thought this was not an ideal situation. So they wrote Section 230 of the Communications Decency Act which read “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Those are the 26 words usually cited as Section 230. But that's just paragraph (1) of subsection (c). There's a second paragraph of subsection (c) which is also important. It's titled "Civil liability." It reads:
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
In other words, even if it’s protected free speech, the platform can take down content it finds objectionable and not lose its protections from liability for other content.
All of this is a long way of saying that if the platform didn't create the content, it's not responsible for it... with a few exceptions.
This is another part of the discussion of Section 230 that gets left out. Section 230 specifically says that this law will have no effect on criminal law, intellectual property law, communications privacy law or sex trafficking law. So the DMCA for example still has to be followed. You have to respond to copyright takedown notices.
So back to the two Supreme Court cases Twitter v. Taamneh and Gonzalez v. Google.
We have to remember that platforms are still responsible for content THEY generate.
If Facebook's own staff post on Facebook defaming you, Section 230 does not protect the company. Section 230 only means Facebook is not on the hook for what I post.
So what about recommendations? What about the stuff in my feed that Facebook chose to show me without my input? Facebook didn't create the content, but it chose to show it to me specifically, not to everyone. That would certainly have counted as editorial control before Section 230, but Section 230 was put in place specifically to allow a measure of editorial control, removal of posts, without having to take responsibility for all posts.
Also remember that "terrorist" content qualifies as criminal content, which Section 230 does not protect. So how long can criminal content be up before a platform "should" have known about it and taken it down? Specific to Twitter v. Taamneh, was Twitter "aiding and abetting" terrorists when it failed to remove such content?
Bearing on both the question of algorithms and criminal content is one more case that tested Section 230 shortly after it became law.
It’s April 25, 1995. Montell Jordan’s “This is How We Do It” tops the charts.
And someone has posted a message on an AOL bulletin board called "Naughty Oklahoma T-Shirts" describing the sale of shirts featuring offensive and tasteless slogans related to the Oklahoma City bombing, which had happened six days before. The posting listed the phone number of Kenneth Zeran in Seattle, Washington, who had no knowledge of the posting. He then received a high volume of calls, mostly angry about the post. Some calls were death threats. Zeran called AOL, which said it would remove the post. However, a new post appeared the next day, and more posts followed over the next four days. One of the posts was picked up by a radio announcer at KRXO in Oklahoma City, who encouraged listeners to call the number. Zeran required police protection and sued KRXO and then, separately, AOL.
In its decision, the United States Court of Appeals for the Fourth Circuit wrote “It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect.”
It also wrote that Section 230 "creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred."
Zeran argued that even if AOL wasn't a publisher, it was a distributor, and under the 1959 case a distributor would still be responsible for speech it knew was defamatory. And Zeran argued AOL knew, because he called them about it after the first post. The judge, however, said that AOL was a publisher, not a distributor, plain and simple. But Section 230 shielded it from the liability normally attached to a publisher. So you can't just redefine it as a distributor.
This ended up as a stronger protection than the distributor standard of the 1959 case. Instead of having to take content down once they knew about it, internet services were given a broader shield.
And that became the principal justification for CDA 230.
And if the Supreme Court follows that precedent it might also consider recommendations to be publishing behavior and therefore protected.
However that’s not what happened. Instead the court seems to think that algorithmic recommendations are new enough that Section 230 doesn’t properly apply to them.
During oral arguments for Gonzalez v. Google on February 22, 2023, multiple Justices indicated they thought Congress should rule on whether algorithmic recommendations should be considered to cause liability or not.
Justice Elena Kagan said "This was a pre-algorithm statute, and everyone is trying their best to figure out how this statute applies. Every time anyone looks at anything on the internet, there is an algorithm involved."
Justice Ketanji Brown Jackson said, “To the extent that the question today is can we be sued for making recommendations, that’s just not something the statute was directed to.”
And Justice Brett Kavanaugh said "Isn't it better to keep it the way it is, for us, and to put the burden on Congress to change that, and they can consider the implications and make these predictive judgments?"
Then on May 18, 2023, the court issued its decision in both cases. Both unanimous.
In Twitter v. Taamneh, the court dismissed the allegations that Twitter violated the US Antiterrorism Act by failing to remove posts before a deadly attack. Justice Clarence Thomas wrote the opinion for the unanimous decision, saying that Twitter's failure to police content was not an "affirmative act."
And he expressed concern that if aiding-and-abetting liability were taken too far, merchants could become liable for the misuse of their goods. He pointed out that email service providers should not be held liable for the contents of email. In fact, he explicitly compared Twitter to email and cell phone providers, who aren't culpable for their users' behavior. A cell phone service provider is not culpable for the illegal drug deals made over its phones.
Specifically regarding Twitter he wrote “There are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants’ relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm’s length, passive, and largely indifferent.”
And he even touched on the main issue from the other case, algorithmic recommendations. He wrote, “the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”
That all meant the court could essentially dodge the entire issue in Gonzalez v. Google, which had rested more on YouTube being liable for its recommendations.
In an unsigned opinion the court wrote that the “liability claims are materially identical to those at issue in Twitter…” And “Since we hold that the complaint in that case fails to state a claim for aiding and abetting … it appears to follow that the complaint here likewise fails to state such a claim.” And “we therefore decline to address the application of section 230.” So the claims in Gonzalez were also dismissed.
In essence these opinions say that if algorithms are not specific to a kind of content, then recommending is not an “affirmative act.” And if you want to change that, Congress needs to pass a new law.
These two decisions left Section 230 unchanged.
In the end, what I want folks to take away is that Section 230 doesn’t free a tech platform to do whatever it wants. It frees a platform to choose to moderate and exercise editorial control over the posts of others without having to assume responsibility for the thousands, and now millions, of posts made every day.
It’s reasonable to argue that perhaps there are some responsibilities that should be restored to tech platforms through legislation. I think it’s worth pointing out that repealing Section 230 altogether would not necessarily achieve that.
So I hope you now have a firmer foundation for your opinion, whatever it is. In other words, I hope you know a little more about Section 230.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos in conjunction with Will Sattelberg and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.

About the DMCA (Updated)

KALM-150x150"

By 1998 the US had passed its Digital Millennium Copyright Act. And partly because the US generates so much copyrightable material, and partly just because it’s the US and is a little pushy on the world stage, the DMCA became the de facto way of handling copyright protections on the internet around the world.

But what is it? Why did we need the DMCA or the WIPO copyright treaty at all?

Let’s help you Know a Little more about the DMCA.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

It’s April 26, 1970. Joe Cocker is playing live at the Fillmore. The Jackson 5’s ABC is dominating the charts. In Novo Mesto, Slovenia, little Melanija Knavs is born. And after three years of planning, the World Intellectual Property Organization has begun operations. The purpose of the specialized agency is to provide a place for countries to work together on their various intellectual property laws and rules. Copyright is of course the most well known type of intellectual property these days, but it also includes trademarks and patents and such. WIPO is meant to be a clearing house. A place to try to harmonize. I’ll respect your patents if you respect mine, etc. In fact its first big achievement is the Patent Cooperation Treaty which, to oversimplify, made filing a patent application in one country equivalent to filing in all. Now different countries still had latitude to approve or deny patents according to their own laws, but it made things a lot simpler.
WIPO made lots of other treaties and systems to make it easier to handle trademarks and service marks. It created mediation and arbitration to help resolve disputes between countries over these kinds of matters.
And in September 1995 it took up the digital agenda. Copyright came to the fore. And somehow. Some way, WIPO agreed on new rules faster than it almost ever agreed on anything. By December 1996 there was a diplomatic conference to approve the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty.
Those two treaties brought countries together to agree on how to handle digital copyright protection. Each country then had to pass its own law to implement the treaty.
By 1998 the US had passed its Digital Millennium Copyright Act. And partly because the US generates so much copyrightable material, and partly just because it’s the US and is a little pushy on the world stage, the DMCA became the de facto way of handling copyright protections on the internet around the world.
But what is it? Why did we need the DMCA or the WIPO copyright treaty at all?
Let’s help you Know a Little more about the DMCA.

Ever since the Internet became more than just something university IT experts used, there have been worries about copyright violations online.
Digital content is infinitely copyable and the internet makes it infinitely transferable. That’s a nightmare for businesses built on physical limitations to copying, like music, movies and others.
To extend these older business models onto the internet, companies use digital rights management, or DRM. This is a name for varying ways of trying to lock up content so that only a user who is authorized to view it can. It’s an attempt to make content not be infinitely copyable. DRM is tricky though, because you have to balance access for the person who does have the right, like a paying customer, with denying access to anyone who doesn’t. Those are cross-purposes. If you leave a door open for authorized viewers, eventually unauthorized viewers will figure out a way through it.
So the industry quickly turned to the law, and we get the Digital Millennium Copyright Act, or DMCA. While this is only a law in the US, it affects anyone who publishes content in the US, such as on YouTube, and it has provided a model for laws like it around the world.
The problem it solves is that no matter what digital locks you put on a file, someone can figure out a way to break them. So the law fixes this by making it illegal to break them.
That’s one of the main misunderstandings about the DMCA. It doesn’t just make unauthorized access illegal. That was already illegal under copyright law. It makes circumventing access protections illegal, punishable by fines and imprisonment.
Copyright holders can seek up to $2,500 per violation, or statutory damages up to $25,000. Repeat offenders can face more. If you are accused of willfully violating the DMCA for personal or commercial financial gain, you can be tried as a criminal offender. A first-time criminal DMCA violator can face a fine of up to $500,000, up to five years in jail, or both. Repeat offenders can be fined up to a million dollars and up to ten years in prison.
Screen capturing can count as circumvention under the DMCA in some cases. Keep that in mind.
The DMCA was passed as an amendment to the US Copyright Act in 1998. It implemented those two 1996 treaties of the World Intellectual Property Organization.
It makes it illegal to produce or disseminate (even if you give them away free) any device or service INTENDED to circumvent measures that control access to copyrighted works. Courts decide whether a device or service is intended to do this. Because, you know, computers can do this, but it’s not their sole intention. And that’s why screen-capturing software is not automatically illegal.
The other aspect of the DMCA is it makes it illegal to circumvent access control EVEN IF copyright is not infringed. Yep. If you have a fair use for something, like making a backup of a DVD, it is illegal under the DMCA to circumvent copyright protection in order to make fair use of that backup. The DMCA includes some limited exemptions such as for security research and government research but they are few.
Now if you’re saying, hold on, I thought they changed that and made DVD copying legal. We’ll get to that later, but yes and no.
There are a couple more aspects of this to keep in mind. One is that the United States Copyright Office (part of the Library of Congress) was given the power to create (and get rid of) further exemptions to the DMCA. So it can restore fair uses on a case-by-case basis. More on that later.
And then there’s a safe harbor for platforms. Online service providers, which includes platforms like YouTube and Facebook, are exempt from liability for their users’ copyright infringement as long as they follow certain procedures. Platforms keep their safe harbor by promptly blocking access to infringing material once they are notified of an infringement claim. This is called the “notice and takedown” process. It also provides for a counter notification from a user who claims the material is not infringing.
There’s also an exemption for a repair person who makes limited copies solely for the purpose of repairing a machine. In other words, imaging a drive to restore it on a replacement drive doesn’t violate the DMCA. There are also some provisions for distance education, ephemeral copies made in the process of broadcasting and more.
DMCA’s Title V is my favorite. Title V protects boat hull designs. Because a boat hull cannot be separated from its useful function, it is not covered by copyright and is better protected by patents. This section was added in 1998 after the Supreme Court ruled, in Bonito Boats, Inc. v. Thunder Craft Boats, Inc., that boat hulls did not have copyright protection. So boat manufacturers immediately lobbied Congress to add the protection to the DMCA. As of 2019 there had been 538 applications to register boat hull designs under the DMCA, compared to more than 70,000 patents granted.
OK back to the notice and takedown system.
The notice and takedown system is governed by Section 512 of the DMCA.
In order to get the safe harbor protection, a service provider has to have an agent on file who takes notifications. The provider can’t have reasonably known about the infringing activity or directly benefit financially from it. In other words, your main business can’t be infringement.
Ok. Now you’re a safe harbor protected platform. How does it work if somebody thinks their copyright has been infringed on your platform?
Well it works differently for every system but here are the parts required by Section 512.
The notifier must send a formal takedown request notification under penalty of perjury. They can’t knowingly lie about it.
Once a notice is received, the provider must “expeditiously take down or block access to the material.” Right away. No grace period. It must also promptly notify the user that the content has been removed or disabled.
The user can then file a counter-notification, also under penalty of perjury, that its content was identified as infringing through a mistake or misidentification.
That sends it back to the notifier. If the notifier does not seek a court order against the user, the provider must restore the content within 10 to 14 days.
So yes. Send a takedown notice and the content goes down immediately. Send a counter-notice and it takes 10 to 14 days to get it back up.
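To make that timeline concrete, here’s a minimal sketch of the notice-and-takedown flow as code. It’s purely illustrative: the state names, functions and simple day counter are my own inventions, not anything defined in Section 512.

```python
from dataclasses import dataclass

# Illustrative model of the Section 512 notice-and-takedown flow.
# State names and structure are hypothetical, not statutory terms.

@dataclass
class HostedContent:
    url: str
    status: str = "up"                  # "up", "taken_down", or "restored"
    counter_notice_day: int | None = None

def receive_takedown_notice(content: HostedContent) -> None:
    # The provider must act "expeditiously": no grace period.
    content.status = "taken_down"

def receive_counter_notice(content: HostedContent, day: int) -> None:
    # The counter-notice starts the 10-to-14-day clock.
    content.counter_notice_day = day

def advance_to_day(content: HostedContent, day: int, suit_filed: bool) -> None:
    # Unless the notifier seeks a court order, restore after the waiting period.
    if content.counter_notice_day is None or suit_filed:
        return
    if day - content.counter_notice_day >= 10:
        content.status = "restored"

post = HostedContent("example.com/video/123")
receive_takedown_notice(post)          # day 0: content comes down immediately
receive_counter_notice(post, day=0)    # user disputes under penalty of perjury
advance_to_day(post, day=12, suit_filed=False)
print(post.status)                     # "restored"
```

Notice the asymmetry the sketch makes plain: takedown is instant, while restoration only happens after the clock runs out.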
So you could abuse the system by just sending notices for anything you wanted to disappear from the internet for a couple of weeks right?
Well, those perjury conditions are meant to keep the system from being abused, but in practice they’re hard to prove. Just being mistaken is not the same as perjury, so you have to prove that a company KNEW the content was not infringing when it sent the notice. And end users are much less willing to risk a perjury claim than the large companies who send bulk notices, so most takedown notices are successful. Willful and malicious abuses are rare. Mistakes, however, are rampant. Lots of companies have been accused of sending inaccurate bulk takedown notices, sometimes ending up affecting their own employees. But that’s not the same as perjury.
There is also a chilling effect to the DMCA. A content-hosting platform can avoid running afoul of the DMCA by just not hosting some material altogether. It’s not required to host it. So some companies, like YouTube, have employed “informal” takedown notices that are not meant to be the legally required notices. These are usually constructed as terms of service violations. This lets them take down content without risking the perjury charge. Companies have the right to operate outside the DMCA in this way because the law can’t force them to host content they don’t want to. A copyright holder is only subject to perjury restrictions if it is following a “formal” takedown procedure. YouTube does have a method of proceeding from informal takedowns to formal ones.
For years YouTube used a bot system called Content ID to look for possibly infringing content. If the bot thought it saw a match to a database of content provided by big copyright holders, it would pull the content off the site and notify the user it had been pulled. This was not part of the DMCA.
If the user disputed the Content ID claim, YouTube would then contact the alleged rights holder. The rightsholder could release the claim, and the content would go back up, or could uphold the claim, and the user would be notified that the rights holder still claimed the content was infringing and it would stay down. This was partly DMCA, as this could also serve as the rightsholder’s formal takedown notice. But since the bot had identified the content as infringing, the risk of perjury for the rights holder was almost nothing.
If the user did not have an account in good standing, or had already appealed three other claims, that was it. The DMCA never entered into it for the user. YouTube just declined to host the content because it didn’t want to.
However, if the user was in good standing and had not reached the appeal limit, a DMCA counter-notification would then be issued to the rightsholder, with the risk of perjury for the user still there, and the normal DMCA takedown procedure would take place. The rightsholder would then have to decide whether to pursue it in court or not.
As I mentioned earlier the US Copyright Office can make exemptions to the DMCA. It regularly reviews exemptions and can add, extend or remove them.
The Copyright Office has issued exemptions to the DMCA over the years. Here’s a look at a few of them.
The first two, in 2000, were for website filtering (you know, safe-sites-for-kids kind of stuff) and for preservation of damaged or obsolete software and databases.
In 2003 an exemption was given to screen readers for e-books and one for video games distributed in obsolete formats.
A brief exemption was given in 2006 for sound recordings protected by software with security flaws, specifically the Sony Rootkit. And one for unlocking wireless phones.
In 2010 an exemption for breaking DVD’s Content Scrambling System was issued for educational, documentary, noncommercial or preservation uses, along with one for security testing of video games.
In 2012 an exemption for excerpting short portions of movies for criticism or comment was given.
In 2018 one for 3D printers if the sole purpose is to use alternate feedstock. As well as ones to expand exemptions for preservation and security research.
In October 2021, an exemption was given for repairing any consumer device that relies on software, as well as medical devices and land, sea and air vehicles even if they aren’t consumer-focused.
What if you’re outside the US? Why should you care? On the one hand, you’re right, US law doesn’t apply outside the US. However, copyright owners from outside the US can still issue takedown notices on US sites. But the bigger thing to remember is that the DMCA is the US implementation of the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty. The WIPO Copyright Treaty was signed by 110 countries, and most members of the World Intellectual Property Organization have agreed to accept DMCA takedown notices. Think of it like this. A country adopted the WIPO treaties, the US created a system to enforce them, and the country just borrows that system. It’s not that US law is enforceable in their country, it’s that the US enforcement system for the WIPO treaties is a nice prepackaged way to do things. Copyright-enforcement as a service!
Some countries, however, are known as DMCA-ignored countries. These are countries that either have not agreed to WIPO’s provisions, systematically ignore those provisions, or prioritize their own copyright laws over those of the US, and so websites hosted there do not honor DMCA requests.
These include Russia, Bulgaria, Luxembourg, the Netherlands, Hong Kong, Singapore, Malaysia, Switzerland, and Moldova. They are often promoted as places to host websites if you’re concerned about copyright infringement, though each carries its own set of concerns, either with local laws or political speech. China doesn’t necessarily honor the DMCA, but has enough other restrictions that it’s generally not included on these lists.
Nobody loves the DMCA, but it has proved to be surprisingly stable. Its next big test will be machine-generated works like those from ChatGPT and the multiple text-to-image generators.
So far the discussions have been about where copyright applies but that is going to drift into the DMCA and put its uneasy equilibrium to the test.
For example, in April 2023 an unknown composer created a song and used some machine generation to make it sound like Drake and the Weeknd. The song lyrics and beats were original but the artist had used a producer tag that was not. Universal Music Group used that producer tag as the basis for copyright takedowns. But versions without the tag would force the issue.
That’s the first, not the last, of what will be a long discussion about where machine-generated works fall in copyright. How that discussion plays out will likely determine whether the DMCA stays standing, gets modified or gets rewritten altogether.
So that is the Digital Millennium Copyright Act, aka the DMCA.
It makes it illegal to circumvent copyright protection unless there is an exemption written in the act itself, or added by the US Copyright Office.
It also provides a way to try to get infringing material removed and a way for a user to combat having that material removed.
I hope this helps you understand why some content is allowed up and some is not and why you don’t see some content at all.
In other words, I hope you know a little more about the DMCA.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos in conjunction with Will Sattelberg and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.

About Taiwan

KALM-150x150"

Taiwan is at once one of the most vexing political situations on the globe and one of the most important to the world of technology.

But few people understand how it got to be either. And understanding that is essential to understanding what might happen next and how that matters a LOT for the technology industry.

Let’s help you Know a Little More about Taiwan.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

Taiwan is at once one of the most vexing political situations on the globe and one of the most important to the world of technology.
But few people understand how it got to be either. And understanding that is essential to understanding what might happen next and how that matters a LOT for the technology industry.
Let’s help you Know a Little More about Taiwan.
This is not going to be a Dan Carlin-style dive into the history of Taiwan. If you know that history consider this a refresher. But for those of you who know little about the island, consider this an excellent starting point to understanding it. And for all of you, I don’t think understanding it is likely to get less important in the coming years. Because it’s one of the places on Earth where it’s conceivable to see a war involving China and the US. AND it’s one of the most important places in the world for building technology. Chips are in everything these days and the chips are made mostly by companies from Taiwan.
Let’s start with the where.
Taiwan is 168 islands, including the Penghu islands, but it is mostly one main island with three main cities: Taipei, Tainan and Taichung. It’s about the size of Vermont. Or Albania.
It is located partway between the Philippines and South Korea, but very close to mainland China. It is 160 kilometers off the coast of southeastern China. About the distance from Dublin to Belfast. You could not fit Ireland between Taiwan and the mainland.
OK, so why is it important? Both the tech industry, which we’ll get to later, AND the dispute over it.
Let’s start with the dispute over what Taiwan thinks it is and what China thinks it is. Because Taiwan thinks it is China. This is one of the most common confusions I hear from people.
Taiwan’s government officially calls the country, the Republic of China. Well that’s odd, you might say. Isn’t there already a Republic of China? Yes. The People’s Republic of China. That’s the one most people think of as China. The one with its capital in Beijing. Both the People’s Republic of China and the Republic of China, which is on Taiwan, consider themselves the legitimate successor to the Republic founded in China on January 1, 1912 after the overthrow of the Qing dynasty. Where they differ is that Taiwan considers itself the true continuation of that republic and the People’s Republic of China says that republic ended in 1949 and was replaced by the People’s Republic.
So what Taiwan is depends on who you ask. The People’s Republic of China, to oversimplify, says Taiwan is a breakaway province that is part of the People’s Republic. Eventually it needs to stop denying that fact, cooperate with the central government and unify with the mainland. Hence China’s strong objections to calling Taiwan a country, or having full diplomatic relations with it. The US wouldn’t want anyone having diplomatic relations with Texas, or Hawaii. The UK doesn’t let Scotland have separate diplomatic relations with other countries.
Meanwhile, the government of Taiwan still considers itself the rightful ruler of all of China. Hence its insistence on officially calling itself Republic of China.
And so you get weird situations like letting Taiwanese athletes compete separately at the Olympics, but only if they call themselves Chinese Taipei and use the Olympic flag.
There are other similar arrangements. For example England, Scotland, and Wales, all part of the UK, compete in World Cup competition as separate teams. The difference being that they are not all calling themselves the true UK.
I bring it up to illustrate the point the People’s Republic of China is making. If Taiwan is just a province of China, then it’s not odd to let it compete separately in things. So call it under a provincial name, and throw Chinese at the front just so people are clear. Taiwan goes along with this so its athletes can compete separately and they consider themselves China as well, so why not call them Chinese Taipei. It’s WAY more complicated than that but you get the gist and it kind of helps illustrate how seriously these countries take the “on paper” meanings of this dispute.
One thing the two countries agree on is that the Republic of China started in 1912.
Sun Yat-sen was the founder and first provisional president. He is honored by both the People’s Republic of China and the Republic of China on Taiwan for ending the rule of China’s imperial dynasties. But it didn’t result in stability right away. China’s political history in the 1920s and 1930s is full of disputes between the Nationalist Party, aka the Kuomintang, and the Communist Party. Sometimes those disputes became battles. The two parties teamed up in World War II to fight their common enemies, but never fully unified. The People’s Republic of China considers the Republican era to have ended on October 1, 1949 with the proclamation of the People’s Republic of China. The Republic of China in Taiwan disagrees.
So that’s where the Republic of China and the People’s Republic of China come from.
Let’s talk a little about Taiwan and how it became part of China in the first place.
The smaller Penghu islands off the coast of mainland China were often under mainland sway; the larger island, though, was somewhat independent. In 1622 the Europeans arrived, first the Dutch, then the Spanish. Europeans called it Ilha Formosa, “the beautiful island,” which was shortened to Formosa, which became the European name for the island. The Chinese didn’t like the idea of the Europeans getting too involved there. So it was finally annexed by China’s Qing Dynasty in 1683. There were attempted invasions over the next couple of centuries by Japan and the French, but Taiwan remained in Chinese control for a good two centuries. Then at the end of the war between China and Japan in 1895, Taiwan was ceded to the Japanese Empire, where it remained until the end of World War II.
So it was loosely affiliated with China until the late 1600s, then solidly part of China for a couple centuries, then occupied by the Japanese for 50 years. So it made sense in 1945 that it would go back to China.
In July 1945, The US, UK and China agreed to the Potsdam Declaration. Among its many provisions was that the islands of Taiwan would be restored to China.
On August 15, 1945, Japan’s Emperor accepted the terms of the Potsdam Declaration, and Japan formally surrendered on September 2. After the surrender, on October 25, 1945, Japan’s governor-general of Taiwan signed papers handing over administration of the island to General Chen Yi, a Nationalist, of the Republic of China.
One technicality: nowhere did Japan confirm in writing that it was giving up its claim to Taiwan. There was no cause for concern on that in reality, but it was a detail that needed to be taken care of. An i to be dotted. A t to be crossed. There were a lot of those. For example, Japan technically remained at war, even after the surrender. Not something that mattered in practice, but you kind of wanted everyone to be clear on the point, right? There were some official treaties in the works to nail down all those technical details.
Problem was while the paperwork was getting drawn up, China was having a civil war.
The Communists, led by Mao Zedong, and the Nationalists, led by Chiang Kai-shek, no longer united by an external foe, started battling for control of the country. And while the allies had been doing all the paperwork with the Nationalists, the Communists started winning the civil war. Mao felt confident enough to proclaim the People’s Republic of China as a replacement for the Republic on October 1, 1949, and by December 7, Chiang and the Nationalists had evacuated their army to Taiwan and set up a capital in Taipei. About 2 million Chinese people and soldiers made the move to Taiwan.
Meanwhile there were all those little things that hadn’t been taken care of regarding the end of the war with Japan, like compensation and rebuilding and oh actually declaring the war over.
So the Treaty of San Francisco was created to wrap up all those details and was signed by Japan on September 8, 1951. Among its many provisions, Japan formally renounced its claim to Taiwan.
Great. Except, China didn’t sign it. Because by that time there were two governments claiming to represent China, Mao’s on the mainland and Chiang’s in Taiwan. Chiang had held strong in Taiwan and with US support continued to claim to be the rightful government of China. There was some recent experience with supporting exile governments. An exile government of France had held out in England and recently been restored. So there was some feeling the same might happen in China.
Meanwhile the USSR wanted to support its communist comrades, and argued that Mao had won and so should be recognised as the legitimate government.
And meanwhile Japan just wanted China, any China, to sign something declaring the war over.
To solve that, on April 28, 1952, Japan and the Nationalist Republic of China government on Taiwan, signed the Treaty of Taipei, formally ending the war between Japan and the Republic of China in Taiwan. Not the mainland. But it was enough to satisfy Japan.
And there’s another little wrinkle to this. Back during the war in 1942, when everybody was on better terms, Soong Tse-ven, whose sister was married to Sun Yat-sen, the man who founded the first Republic of China in 1912, signed a document along with Soviet diplomat Maxim Litvinov, US President Roosevelt and UK Prime Minister Churchill, which later became the basis of the United Nations Declaration. That document gave those four countries a special position in the formation of the UN. And so the US, UK, Soviet Union and China were guaranteed to be on the UN’s permanent security council.
When the UN was founded in 1945, China got its seat. The civil war was just heating up and Mao hadn’t proclaimed the People’s Republic, so Chiang got the seat.
There was some talk about dual representation maybe but that ended in 1949 when the People’s Republic of China was founded and Chiang moved to Taiwan. So Chiang held on to it. And until 1971, Taiwan’s government held China’s seat at the UN.
It was a perilous situation though. Both countries strenuously called for there to be just One China. With the US and USSR facing off with nuclear weapons at the height of the cold war, it seemed unwise for a huge communist country like China to have no seat at the biggest diplomatic table in the world. It was counterproductive to world stability.
All countries wanted a better solution to this. I’m way oversimplifying here of course, but that’s the general situation that led US President Nixon to secretly send National Security Adviser Henry Kissinger to China’s Premier (under Mao) Zhou Enlai. Zhou was often seen as the successor to Mao, and his ally Deng Xiaoping went on to govern China in the 1980s. In a talk between Zhou and Kissinger on July 9, 1971, Kissinger made clear that “we are not advocating a ‘two Chinas’ solution or a ‘one China, one Taiwan’ solution.” Zhou said “the prospect for a solution and the establishment of diplomatic relations between our two countries is hopeful.”
That was good progress. Way to go Hank.
And on July 15, 1971, President Nixon announced he would visit the People’s Republic of China the following year. Remember that Nixon’s US had been fighting a proxy war against China in Vietnam. This was a huge, shocking announcement.
Then, on October 25, 1971, a coalition of Soviet bloc and non-aligned countries, along with the UK and France, voted to give the People’s Republic of China the UN seat in place of Taiwan. The vote was initiated by Albania. You know Albania, the one about as big as Taiwan.
The US acted upset. But Nixon had already said he would go to China. And he did. On February 21, 1972, US President Nixon began a seven-day visit to three cities in China, including a meeting with Chairman Mao Zedong. Mao told Nixon, “I believe our old friend Chiang Kai-shek would not approve of this.”
US TV audiences got a new look at China, and we got the phrase “Only Nixon could go to China.”
The visit changed things for Taiwan too and got us closer to the odd situation we’re in now. The meetings resulted in the Shanghai Communique. The US acknowledged that “all Chinese on either side of the Taiwan Strait maintain there is but one China” but for now to set aside the “crucial question obstructing the normalization of relations.” Clever little diplomatic sidestep that let them be friends or at least friendlier, with both Chinas.
In fact, the US maintained formal relations with the Republic of China in Taiwan until 1979.
Because in 1978 China’s Communist Party really sweetened the deal. It declared that China was in a united front with the US, Japan and Western Europe against the Soviet Union. It supported US operations in communist Afghanistan against the USSR-supported regime there, and China conducted a military expedition against the US’s old nemesis Vietnam. China did all that.
So what are you going to do? On January 1, 1979, US President Jimmy Carter and Zhou’s old friend Deng Xiaoping issued the Joint Communiqué on the Establishment of Diplomatic Relations. This ended US recognition of the Republic of China in Taiwan and established formal relations with the People’s Republic of China. It also ended the Mutual Defense Treaty with the government on Taiwan.
So the US just up and abandoned Taiwan? No.
Not everybody was pleased with the President just ending the defense pact with Taiwan.
You see, the Mutual Defense Treaty with Taiwan had been passed by the Senate in 1954, and the Senate, particularly Senator Barry Goldwater, figured they were the only ones who could undo that. So Senator Goldwater brought a case to the Supreme Court as Goldwater v. Carter. But the court basically said, our name’s Paul, this is between y’all.
It issued a dismissal on the grounds that the case was a political matter, not a judicial one, and would not rule on it. The legislative and executive branches needed to work it out amongst themselves. In fact, Justice Powell wrote in a concurrence that if the Senate had issued a resolution objecting to the dissolution, then it would become a matter for the courts. The Senate had drafted a resolution but did not vote on it.
So that’s what the US Congress did. It went to work on making some laws. And on April 10, 1979, the US enacted the Taiwan Relations Act.
It defined how the US sees Taiwan separately from the People’s Republic of China and has shakily guided international relations around the two countries for decades.
The act refers to the “governing authorities of Taiwan” avoiding the whole issue of who gets called Republic of China. It did not restore diplomatic relations with Taiwan nor did it recognise its government. Doing either of those would have undone the last decade of warming relations instantly.
So no, we will not recognise Taiwan’s government. Instead the Act said Taiwan would be treated under U.S. laws the same as “foreign countries, nations, states, governments, or similar entities”. And the American Institute in Taiwan will not at all be an embassy but it can do anything embassies do. And all agreements made with Taiwan’s Republic of China before 1979 stay in effect with the governing authorities of Taiwan. Except the mutual defense pact.
Which you’re probably thinking was Senator Goldwater’s whole sticking point right? Yes. So here’s what the Act did do. It said “the United States will make available to Taiwan such defense articles and defense services in such quantity as may be necessary to enable Taiwan to maintain a sufficient self-defense capability”, and “shall maintain the capacity of the United States to resist any resort to force or other forms of coercion that would jeopardize the security, or social or economic system, of the people on Taiwan”
In other words, don’t call it a country, but treat it like a country, don’t call it an embassy but use it like an embassy and don’t call it a defense pact, but make sure Taiwan is defended.
And crucially through all of this, never once has the US recognized the People’s Republic of China’s sovereignty over Taiwan.
This approach has been called Strategic Ambiguity.
And it worked. Sort of. It still pissed off China. China’s official position is that the Taiwan Relations Act is “an unwarranted intrusion by the United States into the internal affairs of China.” Deng Xiaoping viewed the US as insincere. A feeling carried on and amplified by subsequent leaders. And over the years, the PRC drifted away from being united with the US against the USSR toward aligning with developing nations.
But the US has not backed off of the strategic ambiguity of the TRA.
It reaffirmed the TRA with a nonbinding resolution in the 1990s, a Congressional Research Service report in 2007, and a concurrent resolution in May 2016.
And for its part, Taiwan has pursued its own strategic ambiguity. You’d think it would have declared itself an independent country, but it never has. In its early days this made sense. You don’t declare independence from something that doesn’t exist. In Chiang Kai-shek’s view, his was the legitimate government of China. There was nothing to declare independence from.
However, since the US recognition of the People’s Republic of China, Taiwan’s insistence on this point, the one-China policy, has lessened. If it ever abandoned that policy, theoretically Taiwan could just declare itself Taiwan, not China, and seek recognition from world governments.
China would not be OK with that.
Worried about the rising possibility of Taiwan admitting reality, China passed a law on March 14, 2005, restating that there is only one China, that Taiwan is part of it, that it’s illegal for Taiwan to secede from China, that all means to peaceful reunification should be pursued, and that under unification Taiwan would get a lot of autonomy. BUT if Taiwan declares itself independent, or is taken over by another country, OR if all possibility of peaceful unification is lost, China will take non-peaceful actions. The law also states that if it does go “non-peaceful,” it must do so while protecting Taiwanese civilians and foreigners as much as possible, as well as Taiwanese interests in the PRC. Pay attention to that last part. Because that includes Foxconn and TSMC plants.
Yeah, about that. Why are so many Taiwanese companies operating in China?
Tensions between Taiwan and China cooled off quite a bit in the 1990s, and the two decided to ignore their diplomatic differences and focus on economic ties. By 2002, China was Taiwan’s largest export market.
China hosts around 4,200 Taiwanese enterprises and more than 240,000 Taiwanese work in China. This dependence on China’s economy has been described as a blessing and a curse. On the one hand it has made Taiwan dependent on China, which gives the People’s Republic leverage over it. On the other hand, close economic ties make military intervention more costly.
Taiwan’s economic success is largely down to tech. The Taiwan Semiconductor Manufacturing Co., or TSMC, founded in 1987, has a market cap equal to about 90% of Taiwan’s GDP. It is among the top 10 largest companies in the world by market cap and a bigger semiconductor manufacturer than Intel or Samsung.
TSMC’s customers include Apple, Qualcomm, Nvidia, Broadcom, AMD, Ampere, Microsoft, MediaTek and Sony. It makes about 60% of the world’s contract-manufactured chips.
Other major tech companies headquartered in Taiwan include Acer and Asus, which make devices like phones, laptops, PCs and more. And Foxconn, which also lists on the stock market as Hon Hai, and is famous for assembling Apple products in its mainland China-based factories, but also makes products for Microsoft, Amazon, Google and Huawei with factories located in Brazil, India, Vietnam and all over Southeast Asia.
Taiwan makes the most important part of arguably the most important devices for the world’s economy.
OK that’s not even a very deep look at Taiwan but it’s still a lot, so let’s summarize.
Taiwan’s current government originated on mainland China as one side of a civil war. Taiwan operates under the fading narrative that it is the true government of China. Only 12 countries, mostly in Micronesia and the Caribbean, have full diplomatic relations with Taiwan.
However, it’s treated de facto like a country by the US and others, but not fully recognised, as a way to placate mainland China, which asserts that Taiwan is just a breakaway province that needs to be reunified.
Since the 1990s, economic interests have superseded diplomatic disagreements to the benefit of pretty much everybody. China got Taiwanese investments. The US got a cheap place to buy parts and assemble electronics. And Taiwan became dominant in the chip industry.
Not to oversimplify the country’s economy but Taiwan is the engine that drives chipmaking. If Taiwan’s companies suddenly disappeared, it would be a LOT harder to make electronics ANYWHERE in the world.
And the US has been able to pull off a magic trick keeping mainland China happy while sheltering Taiwan.
BUT the “strategic ambiguity” is beginning to wear thin. A stricter regime in China is pressing the issue more and is less placated by economic benefits.
From here, you need experts in international relations to explain things to you. But hopefully you have a good grip on the basics with which to understand what’s going on.
In other words, I hope you Know a Little More about Taiwan.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos in conjunction with Will Sattelberg and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.

About Mastodon

KALM-150x150"

Could an open source alternative finally spell the end of Twitter’s short-form bulletin board style postings? It’s been tried before, but now there’s perhaps the best contender for the Twitter throne yet: Mastodon.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

Twitter.

“This website.”

A platform that people love to hate.

A platform people love to yell “I’m leaving.”

But they always come back.

They left for Pownce. And they came back. And Pownce died.

They left for Plurk. And they came back. And Plurk went niche.

They left to start whole new protocols like Diaspora and Identica.

But every time they fly to an alternative, they also fly back.

Except. Maybe this time? Is this the exception?

Because this isn’t the story of Twitter. This is the story of a place that has almost all the ingredients to make the Twitter exodus stick.

Let’s help you know a little more about Mastodon.

It’s March 2007. The SXSW Interactive festival in Austin, Texas is filled with Web 2.0 pitches and internet stars. But one website is standing out. Twitter, with a giant screen in the convention center showing “tweets” as they happen in real time, steals the spotlight.
That feels like the last time people unanimously loved Twitter.
In 2009 a group calling itself the “Iranian Cyber Army” hacks Twitter through a DNS exploit. It won’t be the last time Twitter gets hacked. But it will cause one of the first big rounds of questions about whether Twitter is safe. It won’t be the last time that happens either.
In 2011, Twitter posts are credited with fueling uprisings in the Middle East known as the “Arab Spring.” BUT it also launches the “Quick Bar,” a floating bar at the top of the iOS app, which is withdrawn after loud user complaints.
Then there was #gamergate and the vitriolic arguments and harassment about what was true journalism and who were fake gamers. That led to calls for better Twitter moderation.
You get the idea. Controversies happen frequently on Twitter and when they do, users storm off to try an alternative.
And in 2016, the US presidential election made every one of those controversies look mild. A month out from election day, a European developer decided to be the latest to try to make yet another Twitter alternative. Maybe this would be the one.
Mastodon launched October 5, 2016. Developer Eugen Rochko posted on Hacker News, “Show HN: A new decentralized microblogging platform.” It linked to a GitHub page.
There you could find the basic code of yet another decentralized social network. It stood out from previous attempts in a couple of ways. First, it was truly open source, not a proprietary service pretending to be decentralized. Anybody could set up a Mastodon server. And second, it was polished. It looked like a slick implementation of TweetDeck.
Developers in general were complimentary and several jumped in to help work on the project. Since anybody could set up a Mastodon server, lots of them did, showing that it could work as a truly federated and decentralized platform.
You just needed people to use it.
It simmered away, with people wandering in as they heard about it in various corners of the internet. But that calm period ended in March 2017.
The rancor on Twitter had been snowballing since the election of President Trump. It seems like background noise now, but at the time it was overwhelming. People had gotten angry on Twitter before, but not in these numbers and not with this kind of sustained rage. Almost every user on the platform was picking a side and firing shots at the other.
That may explain why a seemingly innocuous change began another exodus.
On March 30th, Twitter announced that the names of people you are replying to would not count against the character count, and if you replied to more than one person, only the first person’s name would show with the rest available with a click.
Minor stuff, right? WRONG.
With everyone angrily replying to each other and laser-focused on shaming their opponents by name, this was seen as hiding important information! In reality, this was the straw that broke the camel’s ability to stay on Twitter.
So when big names like IT Crowd creator Graham Linehan (aka Glinner) and Community and Rick and Morty creator Dan Harmon started accounts on Mastodon, a wave began. Motherboard’s Sarah Jeong had been working on an article about the little platform and found herself documenting a mass migration.
Mastodon users jumped 70% in 48 hours, and Rochko met his $800-a-month Patreon goal. Jeong posted her Motherboard article on April 4th. Mashable’s Jack Morse posted the same day with the title “Bye, Twitter. All the cool kids are migrating to Mastodon.” A few days later, on April 7, Quartz and The Verge had both published guides on how to use Mastodon.
By April 9, 2017 Mastodon had 129,302 accounts. Nothing compared to Twitter’s hundreds of millions, but a hockey stick-like growth that caught people’s attention.
Rochko’s main instance, Mastodon.social, had to lock registrations to encourage new users to sign up on one of the other 1,200 or so servers.
Mastodon was having its moment. Like Pownce, and Plurk and identi.ca and Diaspora before it.
And almost as quickly as it began. It ended.
The pattern held. The Twitter faithful got mad. The Twitter faithful fled. The Twitter faithful realized that they still liked yelling on Twitter and returned.
By May 22, the headline on the Verge was “What happened to Mastodon after its moment in the spotlight?”
Thankfully for Rochko and friends the story was more Plurk than Pownce. The flood had stopped but there was still growth.
The Verge’s Megan Farokhmanesh described it as a grab bag of “personal observations, video games, politics, comics, and a mix of users speaking in French, Japanese, Spanish, and more.”
In fact it was now a cozy community. Slightly bigger than it had been a few months before but the better for it. The Twitter masses had gone.
For now.

Let’s take a minute to look at how Mastodon works. Because it’s not exactly a Twitter clone. And it points out some of the reasons Mastodon is seen as a good Twitter alternative, and also what its actual roadblocks are to becoming massively popular.
Mastodon’s code is issued under the AGPLv3 open source license and built on the W3C ActivityPub standard. That’s a standard used not just by Mastodon but by other federated services like PeerTube for video, Pixelfed for images and Friendica, another social networking alternative.
But the point here is Mastodon is standards compliant. ActivityPub is a World Wide Web Consortium standard, like HTML.
Mastodon’s open source code is free and the license does not allow anyone to reverse that. It is administered by a German nonprofit called Mastodon which owns the trademark and runs two servers, the original mastodon.social and mastodon.online.
Mastodon describes its federation of servers as the ‘fediverse.’
Basically, anybody can take the code and start a server if they want to maintain it. And those servers can then integrate with other servers in the fediverse as much or as little as they want. Each server will have its own policies and moderation rules. So you can be on a server and see posts on every other server, but you can choose a server that plays by rules you’re comfortable with. Want maximal free speech? Find a maximal free speech server. Want strong moderation and crackdowns on offensive speech? Choose a server with those kinds of policies. You can still interact with the rest of the fediverse, but with filter levels and other rules that you’re comfortable with.
So for example a Mastodon server can see all the posts in the fediverse, but a particular server may choose to ban a list of swear words. If you sign up on that server you’d see all the posts from the rest of the fediverse unless they had swearing in them. But swearing doesn’t have to be banned everywhere. If you don’t mind seeing swearing you can choose a server that doesn’t block it.
And you can also block servers yourself on your own account. Don’t like the policies or perspectives of the people who post on the Mastodon server blacklicorice.rocks? You can stop it from ever showing up in *your* feeds, without needing the server you’re on to block it for everyone.
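To make those two filter levels concrete, here’s a minimal sketch of how server-level and account-level domain blocks might combine. The names and data structures are invented for illustration; this is not Mastodon’s actual code.

```python
# Illustrative sketch of fediverse-style filtering, not Mastodon's internals.
# A post is visible only if neither the server admins nor the individual
# user has blocked the domain it came from.

SERVER_BLOCKED = {"spam.example"}  # domains blocked by your server's admins

class Account:
    def __init__(self, user_blocked=None):
        self.user_blocked = set(user_blocked or [])  # personal domain blocks

    def can_see(self, post_domain: str) -> bool:
        # Server-level blocks apply to everyone on the instance;
        # account-level blocks apply only to this user's feeds.
        if post_domain in SERVER_BLOCKED:
            return False
        return post_domain not in self.user_blocked

me = Account(user_blocked={"blacklicorice.rocks"})
print(me.can_see("mastodon.online"))      # True
print(me.can_see("blacklicorice.rocks"))  # False: my personal block
print(me.can_see("spam.example"))         # False: blocked instance-wide
```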
Of course, most people will pick a server that has policies they agree with, so they don’t have to do a lot of maintenance and blocking. But what if you change your mind? Or pick the wrong server? Or your server changes ITS policies?
This is where another feature of the fediverse comes in handy. You’re not locked into a server. You can try one out and then change your mind and not lose your data.
Mastodon makes it possible to take your follower lists along with you. With Facebook or Twitter, that would mean abandoning everything. With Mastodon it just means a couple of export and import clicks. There are one or two steps depending on what you want to keep after you move. If all you want to keep is your followers (so people find you immediately at your new server) you can do that automatically. If you want to keep who YOU follow, as well as mute lists, block lists, bookmarks and domain blocks, you need to export a file with that info and import it when you set up the new account. The point being, it’s not complicated to move from one server to another.
This is also why Mastodon usernames look like email addresses. [email protected] for example. The first part is the user name and the portion after the at symbol is the server name.
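Since the handle itself carries the routing information, splitting one apart is trivial. A quick sketch, using a made-up handle:

```python
# Hypothetical fediverse handle in the form username@server.
# The server part tells other instances where the account lives,
# much like the domain part of an email address.
handle = "tooter@mastodon.example"
username, server = handle.split("@")
print(username)  # "tooter"
print(server)    # "mastodon.example"
```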
Even with the ability to switch servers, having to choose one makes it daunting for some people to sign up in the first place. Not just for the simple reason of having to choose, but because the various apps are still developing better ways to make it easy to see what’s available and get signed up.
Whatever server you end up on, you’ll be able to view multiple feeds. And they’re pretty familiar if you’re a Twitter user. Different servers can tweak them a little but usually there’s one for people you follow, one for interactions with your posts, one to see everything on your local server and quite often one called “Federated” which lets you see every post from every server your server interacts with.
A lot of servers also have a feed called Explore which lets you see posts from across the fediverse that are getting a lot of attention. That’s the closest Mastodon gets to a “trending topics” feed.
There are also Direct messages, Favourites and Bookmarks. Favourites let other people know you like something, bookmarks are for you to reference something later whether you “like” it or not. And you can make your own lists.
The standard message posting on Mastodon has a maximum of 500 characters. You can attach an image, run a poll, add a content warning and select a default language for the post. Posts were jokingly referred to as Toots in the early days, a play on Tweets and on the fact that Mastodon’s logo is a big hairy extinct elephant. While the word toot is still in use, it’s somewhat deprecated.
Posts can also have varying privacy settings. You can let a post be public across the fediverse, private to only your followers, direct between users or even unlisted, so anyone can see it if they know where to look but it won’t be discoverable.
There are some differences from Twitter too.
Search is more limited, with most servers only returning matches for user names and hashtags. For example, the Explore feed only follows hashtags, not individual words in posts. And Boosts, the Mastodon equivalent of Retweets, do not allow you to add commentary.
One of the downsides of Mastodon’s federated approach is that not every server is as well run as every other. Large popular servers have few problems, but niche communities rely on the good graces of small teams or sometimes individuals. There is no monetization built into the platform, so the folks who run servers rely on crowdfunding like donations or Patreon.
So not every server is secure, and things like posting images can become an ethical dilemma if you know each image is increasing costs for the volunteer who runs your server.
That boils down to two things working against Mastodon’s uptake with the wider populace: ease of use and difficulty of maintenance. Hold that thought though.
The tradeoff is that you get that ability to pick and choose moderation. Something that attracted people to another run at Mastodon in 2022. This time it was much bigger.

By October 2022, Mastodon had grown to 300,000 users. A little less than three times what it had during the great yet brief migration of 2017. It wasn’t booming but it wasn’t declining. Just a nice slow growing community of people. A small suburban feel.
Then. On October 27, 2022, Elon Musk closed his long embattled acquisition of Twitter.
It would take an entire separate episode to discuss all the events of Musk’s first few months owning Twitter. Lifting the ban on President Trump, firing executives, firing more people, lifting more bans, launching paid verification, unlaunching paid verification, laying off more people, making decisions by poll. And with each event, Twitter users did what Twitter users have always done. Flee to try something else.
And there were new platforms to try like Hive and Post and platforms on the comeback trail from decline, like Tumblr. But the biggest by far was a familiar furry trunk.
Mastodon had never gone away. It was never in decline. But it had never grown like this. Between October and November 2022 it grew 800% to 2.5 million. Still much smaller than Twitter’s 350 million plus, but now in the conversation.
The holidays took some of the momentum away though as people paid less attention to whatever wild thing Twitter’s CEO was posting. And after the first of the year, CES diverted the tech world’s attention such that Musk’s antics seemed to engender less panic than they had.
By February, Mastodon users had fallen from the 2.5 million high to 1.4 million.
It looked like an old story. Twitter users angered. Twitter users flee. Twitter users get over it. Twitter users come back. Alternative platform left to pick up the pieces.
Except.
On January 19, Twitter changed its API, kicking third-party clients off the platform. That left developers of those clients wondering if they shouldn’t make a Mastodon app. The folks who made Tweetbot launched Ivory. The folks who made Aviary launched Mammoth, and even got funding from Mozilla. Suddenly there were easier ways to get started with Mastodon, built by experienced developers who, by all rights, never should have been handed the opportunity.
And on February 10, Cloudflare, a company that makes its money securing big websites from cyberattacks and downtime, launched Wildebeest. It lets you quickly spin up a Mastodon server that supports ActivityPub and other Fediverse APIs, with the ability to publish, edit, boost, and delete posts. And of course the server sits behind Cloudflare’s protection from denial of service and other attacks. You still needed some tech chops, but it made it a lot easier for someone to get a server up and going without worrying as much about maintenance and security.
And Fast Company had another point. Maybe Mastodon isn’t following the Twitter alternative pattern at all. Maybe it’s following the Twitter pattern.
In April 2009, two years after launch, Nielsen noted that only 40% of Twitter users still used the service. In February 2011, Forbes noted Twitter’s user base had dropped by 5 million. Even as recently as 2014, a Reuters/Ipsos poll found that 36% of people who joined Twitter said they never used it.
And yet Twitter, even after all of the outrage, is still going.
Mastodon may or may not be a replacement for Twitter. But it may very well be a new platform in the mix, and possibly could become something totally new and unexpected.
In other words, I hope you Know a Little More about Mastodon.

About RSS

KALM-150x150"

The story of RSS is simple and yet combative. In fact RSS’s success may hinge on one man’s idealistic dedication to his principles. Tom takes you through the history of RSS.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

You probably use an RSS feed. In fact if you got this episode as a podcast you definitely used an RSS feed. Most people these days don’t even know they’re there. The story of RSS is simple and yet combative. In fact RSS’s success may hinge on one man’s idealistic dedication to his principles. If you’ve ever thought “why are people making this so complicated?” If you’ve ever wondered what it would be like to be a person who just shut everyone up with an action that for right or wrong would stand the test of time. Get ready to Know a Little More about RSS.

People say RSS stands for Really Simple Syndication though it really doesn’t. That’s one of the charms of the story of RSS. Throughout its formative years nobody could agree on much and the name is still a matter of debate to this day.
If you’ve heard of RSS at all, it was most likely in connection with Podcasts. Podcasts are delivered through RSS feeds to the apps and platforms where you can listen to them. Behind every Apple Podcast, Google Podcast, Audible Podcast and even most Spotify podcasts, there’s a simple RSS feed. You may also use RSS as a feed for headlines. If you use Feedly, NewsBlur or Inoreader or something like that you’re using RSS.
But where did RSS come from? Oh my friends. Be prepared for a tale of idealism, abandonment, betrayal and perseverance. It is the tale of RSS.
In the earliest days if you wanted to know if a website had been updated you had to visit it. As websites became more common this became a chore. So people experimented with ways to let you know when a website had been updated, without you having to go there. One of the earliest attempts at this was the Meta Content Framework or MCF, developed in 1995 in Apple’s Advanced Technology Group.
Ramanathan V. Guha was part of that group and a few years later he moved over to browser-maker Netscape, where he and Dan Libby kept working on these sorts of ideas. Guha particularly liked developing the Resource Description Framework, or RDF, similar to the old MCF he had worked on at Apple. It was a complex way to show all kinds of things about web pages without having to visit them.
But Netscape’s team of Guha, Libby and friends was not alone. And early on they weren’t the most likely to succeed. The Information and Content Exchange standard, or ICE, was proposed in January 1998 by Firefly Networks, an early web community company, and Vignette, a web publishing tool maker. They got some big names to back ICE too. Microsoft, Adobe, Sun, CNET, National Semiconductor, Tribune Media Services, Ziff Davis and Reuters were among the ICE authoring group. But it wasn’t open source. In those days respectable tech companies like those I just named still cast a skeptical eye on open source code. How were you supposed to make money on it? Who would keep working on it if they weren’t paid? So the members of the ICE authoring group paid people to develop it. And in the end that meant it developed slower than competing standards.
Interestingly, ICE’s failure caused Microsoft to get a little more open, a little earlier than you might expect. In 1997 Microsoft and PointCast created the Channel Definition Format, or CDF. They released it on March 8, 1997, and in order not to fall into the death by slow development that ICE seemed headed for, they submitted it as a standard to the W3C the next day.
It was adopted quickly and in fact its success planted the seed of its successor. Dave Winer had founded a software company in 1988 called UserLand. UserLand added support for CDF on April 14, 1997, one month after its release. Winer also began publishing his weblog, Scripting News, in CDF. But CDF, like ICE, was more complicated than a smaller site needed. So on December 27, 1997, Winer began to publish Scripting News in his own scriptingNews feed format as well. He had just simplified CDF for his own needs, and he made the format available for anyone who wanted to use it to subscribe.
Meanwhile Libby had been working away at his own version of a feed platform, and Netscape was about to make a big launch that would cause his project to surpass them all. On July 28, 1998, Netscape launched My Netscape Portal. This was one of the earliest web portals, a place that aggregated links from sites around the web. You could add sites you wanted to follow, like CNET or ZDNet, and then see their latest posts all in one place.
Netscape kept the links updated with a set of tools developed by Libby. He had taken a part of an RDF parsing system that his friend Guha had developed for the Netscape 5 browser, and turned it into a feed parsing system for My Netscape. He called it Open-SPF at the time, for Site Preview Format.
Open-SPF let anyone format content that could then be added to My Netscape. It was rich like CDF, open like CDF but had one advantage over CDF. It worked on My Netscape, which suddenly everyone wanted to be on.
Netscape provided it for free because that meant the company didn’t have to spend time reaching deals for content. You want your content on My Netscape, use Open-SPF, it can be there. That meant there was more content available for My Netscape than was usual on curated pages. The content was free for both the users and Netscape. More content meant more users and more users meant Netscape could serve more ads. And content providers were willing to create the Open-SPF feeds, because they weren’t burdensome to create and the sites got more visitors who saw their content on My Netscape and clicked on links to come to their sites.
Sound familiar? This arrangement is the one Google still tries to rely on for Google News. Except the news publishers have changed their tune. Back then they were all about bringing visitors to their websites and happy that Netscape sent folks their way for free. But as the years have passed and revenue has shrunk, now they’re more about getting Google to pay them for linking to their news.
Anyway back to the rise of Netscape.
1999 is not only the end of the millennium. It’s not only when everyone actually got to party the way Prince had been asking them to pretend to party. 1999 was a huge year for RSS. It was about to reach its modern form and become something users of RSS today would recognise. By name.
On Feb. 1, 1999 Open-SPF was released as an Engineering Vision Statement for folks to comment on and help improve.
Dave Winer commented that he would love to add Scripting News to My Netscape but he didn’t have time to learn Netscape’s Open-SPF. However because he had his own self-made feed format using XML he’d “be happy to support Netscape and others in writing syndicators of that content flow. No royalty necessary. It would be easy to have a search engine feed off this flow of links and comments. There are starting to be a bunch of weblogs, wouldn’t it be interesting if we could agree on an XML format between us?”
However by February 22, Scripting News was publishing in Open-SPF and available on My Netscape. Feeling like it was a success, Libby changed the name of Open-SPF to reflect the fact that it used RDF, calling it the RDF-SPF format, and released specs for RDF-SPF 0.9 on March 1. Shortly after release he changed that unwieldy name to RDF Site Summary, or RSS for short. Thus begins the first in a parade of meanings for RSS.
And the new name took off. Carmen’s Headline Viewer came out on April 25th as the first RSS desktop aggregator and Winer’s my.UserLand.com followed on June 10th as a web-based aggregator.
Folks liked the idea obviously, but a lot of RSS enthusiasts thought the RDF was too complex, Dave Winer among them. Libby hadn’t ignored Winer’s earlier offer either. In fact, Libby thought they weren’t really using RDF for any useful purpose. So he simplified the format, adding some elements from Winer’s scriptingNews and removing RDF so it would validate as plain XML. This was released on July 10, 1999 as RSS 0.91.
Some folks write that the name changed to Rich Site Summary at that point but Winer wrote at the time “There is no consensus on what RSS stands for, so it’s not an acronym, it’s a name. Later versions of this spec may say it’s an acronym, and hopefully this won’t break too many applications.”
Anyway by 1999, like Toy Story, RSS is on a roll. Libby is bringing in feedback from the community and creating a workable usable standard that is reaching heights of popularity beyond just the confines of My Netscape.
Like some kind of VH1 Behind the Music story, just as it reached that height, everything fell apart.
Netscape would never release a new version of RSS again.
In the absence of Netscape’s influence, two competing camps arose.
Rael Dornfest wanted to add new features, possibly as modules. That would mean adding more complex XML and possibly bringing back RDF.
Dave Winer preached simplicity. You could learn HTML at the time by just viewing the source code of a web page. Winer wanted the same for RSS.
On August 14, 2000, the RSS 1.0 mailing list became the battleground for the war of words between the two camps.
Dornfest’s group started the RSS-DEV Working Group. It included RDF expert Guha as well as future Reddit co-founder Aaron Swartz. They added back support for RDF as well as including XML Namespaces. On December 6, 2000 they released RSS 1.0 and renamed RSS back to RDF Site Summary.
Not to be left behind, two weeks later, on December 25, 2000, Winer’s camp released RSS 0.92.
Folks, grab your steak knives. We have a fork.
In earlier days, Libby, or someone at Netscape, would have stepped in. But AOL had bought Netscape in 1998 and had been de-emphasizing My Netscape. They wanted people on AOL.com. And if they didn’t care about Netscape, they cared even less about RSS. In fact they actively did things that could have ended RSS. In April 2001, AOL closed My Netscape and disbanded the RSS team, going so far as to pull the RSS 0.91 document offline. That document was used by every RSS parser to validate feeds. Suddenly RSS feeds stopped validating. Apparently this had little effect on visitors to AOL.com or people dialing in to their internet connections, so AOL just let the feeds stay broken. With the RSS team gone and AOL doing nothing, RSS feeds were looking dead in the water.
But the RSS 0.91 document was just a document after all. And there were copies. Anybody could theoretically host it, as long as everyone else changed their feeds to validate against the new address. Dave Winer stepped up.
Winer’s UserLand published a copy of the document on Scripting.com so that feed readers could validate again. That right there won Winer a lot of good will.
An uneasy truce followed. Whether you were using Netscape’s old RSS 0.91, Winer’s new RSS 0.92 or the RSS-DEV Working Group’s RSS 1.0, they would all validate.
By the summer of 2002, things are going OK and tempers have cooled. Nelly has a hit song advising folks what to do if things get hot in here. Maybe we can solve this? Let’s try to merge all three versions into one new version we can all agree on and call it RSS 2.0, right?
Except they couldn’t agree. Winer still wanted simplicity. RDF folks still wanted RDF and the fun features it would bring. They would agree to a simplified version of RDF but they still wanted it. To make matters more confusing, Winer was discussing what should happen by blog, with everyone pointing to their own blogs. The RDF folks were talking about it on the rss-dev mailing list.
Communication, oddly in a discussion about a communication platform, was the problem. Since neither side was seeing the other’s arguments, they never came to an agreement. So Winer’s group decided not to wait. On September 16, 2002, UserLand released its own spec and just went ahead and called it RSS 2.0. And Winer declared RSS 2.0 frozen. No more changes.
Discussions continued on the rss-dev list, but Winer’s camp got another victory when, in November 2002, the New York Times adopted RSS 2.0. That caused a lot of other publications to follow suit, further consolidating RSS 2.0’s position.
The next year, in another move to fend off the debate, on July 15, 2003, Winer and UserLand assigned ownership of the RSS 2.0 copyright to Harvard’s Berkman Center for Internet & Society. A three-person RSS Advisory Board was founded to maintain the spec in cooperation with the Berkman Center, which continued the policy of considering RSS frozen. Mic. Dropped.
There was still a resistance. IBM developer Sam Ruby set up a wiki for some of the old RDF folks, and others, to discuss a new syndication format to address shortcomings in RSS and possibly replace Blogger and LiveJournal’s protocols. The Atom syndication format was born of this process and was proposed as an official internet protocol standard in December 2005. Atom has a few more capabilities and is more standards-compliant, being an official IETF Internet standard, which RSS is not. But in practice they’re pretty similar. Atom’s last update was October 2007, but it is still widely supported alongside RSS.
And RSS 2.0 kept going. In 2004 its ability to do enclosures, basically pointing to a file that could be delivered along with text, led to the rise of podcasts: RSS feeds that point to MP3 files.
In 2005, Safari, Internet Explorer, and Firefox all began incorporating RSS into their browsers. Mozilla’s Stephen Horlander had created the web feed icon, the little orange block with a symbol like the WiFi symbol at an angle. It was used in Firefox’s implementation of RSS support, and eventually Microsoft and Opera used it too. It was also used for Atom feeds. Stephen Horlander did what most could not: get people interested in providing automated web feeds to agree on something.
And in 2006, with Dave Winer’s participation, RSS Advisory Board chairman Rogers Cadenhead relaunched the body, adding 8 new members to the group in order to continue development of RSS.
Peace in the form of an orange square was achieved.
OK. So RSS has a colorful history. What the heck does it do?
That part is pretty simple. It’s a standard for writing out a description of stuff so that it’s easy for software to read and display it.
Basically you have the channel (or Feed in Atom) and Items (or entries in Atom).
RSS 2.0 requires the channel to have three elements; the rest are optional. So to have a proper feed you need a title for your channel, a description of what it is, and a link to the source of the channel’s items.
Like Daily Tech News Show – a show about tech news – and a link to dailytechnewsshow.com.
Optional elements of RSS are things like an image, publication date, copyright notice, and even more instructions like how long to go between checking for new content and days and times to skip checking.
The items are the stuff in the feed. There are no required elements of an item, except that it can’t be empty. It has to have at least one thing in it. So an item could just have a title or just have a link. However most of the time an item has a title, a link and a description. The description can be a summary or the whole post. Other elements of the item include author, category, comments, publication date and of course enclosure.
So for our Daily Tech News Show example, the title might be Episode 5634 – AI Wins, the description might be “Tom and Sarah talk about how AI just won and took over everything,” and the link would point to the post for that episode.
The enclosure element lets the item point to a file to be loaded. The most common use for the enclosure tag is to include an audio or video file to be delivered as a podcast.
For Daily Tech News Show that would be a link to the MP3 file.
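To make that concrete, here’s a minimal sketch of building such a feed with Python’s standard library. The show name, episode title and description echo the example above; the URLs, file name and byte length are made-up placeholders, not real addresses.

```python
# A minimal sketch of an RSS 2.0 feed, built with Python's standard
# library. Channel and item values echo the Daily Tech News Show
# example above; the URLs and byte length are hypothetical.
import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")

# The three required channel elements: title, description and link.
ET.SubElement(channel, "title").text = "Daily Tech News Show"
ET.SubElement(channel, "description").text = "A show about tech news."
ET.SubElement(channel, "link").text = "https://dailytechnewsshow.com"

# An item can't be empty, but no specific element is required.
# A typical item carries a title, a description and a link.
item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Episode 5634 - AI Wins"
ET.SubElement(item, "description").text = (
    "Tom and Sarah talk about how AI just won and took over everything."
)
ET.SubElement(item, "link").text = "https://dailytechnewsshow.com/5634"  # hypothetical URL

# The enclosure element is what turns a feed into a podcast: it points
# the player at a media file via three attributes, url, length (bytes)
# and type. All three values here are made up for illustration.
ET.SubElement(item, "enclosure", {
    "url": "https://dailytechnewsshow.com/5634.mp3",
    "length": "12345678",
    "type": "audio/mpeg",
})

print(ET.tostring(rss, encoding="unicode"))
```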
In the end an RSS reader or a podcast player looks at an RSS feed the way your browser looks at a web page. It sees all the titles, links, descriptions and possible enclosures, and then loads them up and displays them for you.
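And here’s a rough sketch of that reader side, under the same assumptions: the feed’s XML is inlined so the example is self-contained (a real reader or podcast app would fetch it over HTTP), and the hypothetical URLs carry over from the sketch above.

```python
# A minimal sketch of what an RSS reader does with a feed: walk the
# channel and pull out whatever titles and enclosures it finds. The
# feed text is inlined for self-containment; URLs are hypothetical.
import xml.etree.ElementTree as ET

feed_xml = """<rss version="2.0"><channel>
  <title>Daily Tech News Show</title>
  <link>https://dailytechnewsshow.com</link>
  <description>A show about tech news.</description>
  <item>
    <title>Episode 5634 - AI Wins</title>
    <enclosure url="https://dailytechnewsshow.com/5634.mp3"
               length="12345678" type="audio/mpeg"/>
  </item>
</channel></rss>"""

channel = ET.fromstring(feed_xml).find("channel")
print("Feed:", channel.findtext("title"))

for item in channel.iter("item"):
    # No element is required in an item, so check before using anything.
    print("Item:", item.findtext("title", default="(untitled)"))
    enclosure = item.find("enclosure")
    if enclosure is not None:
        # A podcast player would queue this URL for download.
        print("  Audio:", enclosure.get("url"))
```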
After a rather stormy opening decade, RSS has settled down into a reliable and, with apologies to team RDF, simple way of syndicating info. Really Simple Syndication indeed.
Like podcasting, which it provides the underpinnings for, RSS has been declared dead several times. But it just keeps on enduring. I hope you now have a little more appreciation for that tiny file that delivers your headlines and shows. In other words, I hope you know a little more about RSS.

About Section 230

KALM-150x150"

What you need to know to form an opinion about Section 230, the “safe harbor” law in the US for tech platforms.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to [email protected]

Episode transcript:

Two cases are before the US Supreme Court regarding protections provided by Section 230 of the US Communications Decency Act. Gonzalez v. Google claims that a platform, in this case YouTube, should be liable for content it recommends to users. Twitter v. Taamneh argues that Twitter provided unlawful material support to terrorists by failing to remove their accounts from its platform.
A lot of people are talking about these cases. And a lot of well-intentioned and well-informed people are going to make arguments based on misunderstandings of Section 230. So in this special episode I want to cover just what Section 230 is and what it isn’t. In other words I’ll help you Know a Little More about Section 230.

We covered the history and meaning of Section 230 in depth in the episode About Safe Harbor in July 2020. So if you want the deep dive please listen to that.
This episode will focus on how to properly explain and think about Section 230 no matter what argument you may or may not be trying to make. You may think Section 230 promotes censorship. You may think it protects big tech companies from responsibility. You may think it should be repealed. Those are all reasonable positions to take. But I often hear people argue these sorts of positions from a starting point that is wrong. I just want to give you the correct starting point from which you can make your argument.
So let’s start with the folks who say we should just get rid of it. There is a misconception that if we get rid of Section 230 companies would have to take responsibility for the content on their platform or that they would have to stop censoring. Neither one of those things is assured.
Without Section 230, ANY platform, and it’s worth pointing out this applies to a forum you might run on your own website just as much as to Facebook, would be seen in the eyes of the law as either a publisher of information or a distributor. A publisher is responsible for what it publishes. A distributor is not responsible for the contents of what it distributes.
The easiest way to think about this is a brick-and-mortar bookstore. The publishers of the books and magazines it sells are responsible for what’s in the books and magazines. The bookstore is just the distributor. In fact a 1959 Supreme Court case ruled that a bookstore owner cannot reasonably be expected to know the content of every book it sells. The owner should only be liable if they knew or should have known that selling something was specifically illegal. Otherwise the publisher is liable for what’s in the book or magazine.
Now let’s think about that for a minute. The bookstore can decide what magazines to carry. But it’s not deciding what’s in the magazine. And it still isn’t allowed to sell magazines that it knows are illegal. Also of note is that letters to the editor published in the magazines are still the responsibility of the publisher. Just because a reader wrote the letter doesn’t free the publisher from liability. Because the magazine’s publisher chose to publish it. It exercised editorial control.
So the bookstore gets protection because it’s not exercising editorial control of what’s in the books.
Fast forward to the 1990s. CompuServe and Prodigy are vibrant new parts of the internet where people are talking to each other like never before.
It’s April 1990. Sinead O’Connor’s new song Nothing Compares 2 U (written by Prince) tops the Billboard charts.
Robert Blanchard’s company Cubby Inc. has developed Skuttlebut, with a K, a database for TV news and radio gossip. It’s a new competitor for CompuServe’s Rumorville. Rumorville is published by Don Fitzpatrick Associates on CompuServe’s Journalism Forum. Skuttlebut and Rumorville are in stiff competition for the burgeoning online audience that wants TV and radio news industry gossip. This is FIVE YEARS before the Drudge Report, mind you.
In the heat of the competition Rumorville USA posts that Skuttlebut has been getting info from a back door at Rumorville. And that Skuttlebut’s owner, Robert Blanchard, got “bounced” by WABC. And– and you don’t do this folks– described Skuttlebut as a “scam.”
Blanchard and Cubby Inc. sued Don Fitzpatrick Associates, but also sued CompuServe as the publisher. But here’s the thing. CompuServe did not review Rumorville’s content. Once it was uploaded it was available. CompuServe also didn’t get any money from Rumorville. The only money it made was off subscribers to CompuServe itself, whether they read Rumorville or not.
In Cubby, Inc. v. CompuServe, the judge ruled that CompuServe was not a publisher. It was a distributor. It could not reasonably know what was in the thousands of publications it carried on its service. Therefore, like a bookstore, CompuServe was not liable for what was published in Rumorville USA.
Reminder. This is without Section 230. The platform was not exercising control over the content so it was not liable for what was in it.
On to October 1994. Boyz II Men is dominating the charts with a long run at number one with “I’ll Make Love to You.”
Prodigy’s Money Talk message board is still awash in talk about the bond market crisis. And an anonymous user has posted that securities investment firm Stratton Oakmont committed crime and fraud related to a stock IPO. Stratton Oakmont files a lawsuit against Prodigy alleging the company is the publisher of the information.
So you’d think, given the CompuServe case, that Prodigy is in good shape. It didn’t publish the comments; the commenter did.
Except. It’s been a few years, and a few raging internet flame wars later, and Prodigy, like many other platforms, has developed some Content Guidelines for users to follow. It also has Board Leaders who are charged with enforcing those guidelines. And Prodigy even uses some automated software to screen for offensive language. This is all good community moderation practice, right? A clear set of guidelines. Consequences if you violate them. And even some automated ways to keep some of the bad stuff from ever showing up.
The court looked at that and said, well, looks to us like you’re exercising editorial control. You’re deciding who gets to post what. That feels a lot more like the letters to the editor than it does the bookstore. The court wrote “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.”
In Stratton Oakmont v. Prodigy, the court ruled in favor of Stratton Oakmont.
After that case the law stands that courts will give you the protection of a distributor, as long as you don’t moderate. If you moderate the content, you’re on the hook for it.
So in other words before Section 230, you could either leave everything up or you’d have to be responsible for everything, meaning you’d have to pre-screen all posts. Your choice is either zero moderation or prior restraint.
Republican Chris Cox and Democrat Ron Wyden both thought this was not an ideal situation. So they wrote Section 230 of the Communications Decency Act which read “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Those are the 26 words usually cited as Section 230. But that’s just paragraph 1 of subsection (c). There are a lot of other subsections related to definitions, why the act was being made and so on. But there’s a second paragraph of subsection (c) which is also important. It’s called “Civil liability.” It reads:
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
In other words, even if it’s protected free speech, the platform can take down content it finds objectionable and not lose its protections from liability for other content.
All of this is a long way of saying that if the platform didn’t create the content, it’s not responsible for it… with a few exceptions.
This is another part of the discussion of Section 230 that gets left out. Section 230 specifically says that this law will have no effect on criminal law, intellectual property law, communications privacy law or sex trafficking law. So the DMCA for example still has to be followed. You have to respond to copyright takedown notices.
So back to the two Supreme Court cases Gonzalez v. Google and Twitter v. Taamneh.
Section 230 does not let Facebook publish anything without being responsible. It just means it’s not on the hook for what I post just because it removes other posts. It’s an interesting question whether recommendations count as content created by the platform or not. It would certainly count as editorial control before Section 230, but Section 230 was put in place specifically to allow a measure of editorial control without having to take responsibility for all posts.
It’s also an interesting question whether “terrorist” content qualifies as criminal content, which Section 230 does not protect. And whether Twitter should have known about it and removed the accounts.
Bearing on both those questions is one more case that tested Section 230 shortly after it became law.
It’s April 25, 1995. Montell Jordan’s “This is How We Do It” tops the charts.
And someone has posted a message on an AOL bulletin board called “Naughty Oklahoma T-Shirts” describing the sale of shirts featuring offensive and tasteless slogans related to the Oklahoma City bombing, which had happened six days before. The posting listed the phone number of Kenneth Zeran in Seattle, Washington, who had no knowledge of the posting. He then received a high volume of calls, mostly angry about the post. Some calls were death threats. Zeran called AOL, which said it would remove the post. However the next day a new post was made, and more posts went up over the next four days. One of the posts was picked up by a radio announcer at KRXO in Oklahoma City, who encouraged listeners to call the number. Zeran required police protection and sued KRXO and then separately AOL.
In its decision, the United States Court of Appeals for the Fourth Circuit wrote “It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect.”
It also wrote that Section 230 “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.”
Zeran argued that even if AOL wasn’t a publisher, it was a distributor, and under the 1959 case a distributor would still need to be responsible for speech it knew was defamatory. And Zeran argued AOL knew, because he called them about it after the first post. The judge however said that AOL is a publisher, not a distributor, plain and simple. But Section 230 shields it from the liability normally afforded a publisher. So you can’t just redefine it as a distributor to get around that.
This ended up as a stronger protection than distributors got under the 1959 case. Instead of having to take content down once they knew about it, internet services were given a broader shield.
And that became the principal justification for CDA 230.
And if the Supreme Court follows that precedent it might also consider recommendations to be publishing behavior and therefore protected. That’s not the only way it could rule but it is a possibility.
In the end what I want folks to take away is that Section 230 doesn’t free a tech platform to do whatever it wants. It frees a platform to choose to moderate and exercise editorial control over the posts of others without having to assume responsibility for the thousands, and now millions of posts made every day.
It’s reasonable to argue that perhaps there are some responsibilities that should be restored to tech platforms through legislation. I think it’s worth pointing out that repealing Section 230 altogether would not necessarily achieve that.
So I hope now you have a firmer basis on which to build your opinion, whatever it is. In other words, I hope you know a little more about Section 230.