
The Download: China’s social credit law, and robot dog navigation

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Here’s why China’s new social credit law matters

It’s easier to talk about what China’s social credit system isn’t than what it is. Ever since 2014, when China announced plans to build it, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there’s an opportunity to correct the record.

Most people outside China assume it’ll be a Black Mirror-esque system that uses technology to automatically score every Chinese citizen based on what they did right and wrong. Instead, it’s a mix of attempts to regulate the financial credit industry, to enable government agencies to share data with each other, and to promote state-sanctioned moral values—however vague that may sound.

Although the system itself will still take a long time to materialize, by releasing a draft law last week, China is now closer than ever to defining what it will look like—and how it will affect the lives of millions of citizens. Read the full story.

—Zeyi Yang

Watch this robot dog scramble over tricky terrain just by using its camera

The News: When Ananye Agarwal took his dog out for a walk up and down the steps in the local park near Carnegie Mellon University, other dogs stopped in their tracks. That’s because Agarwal’s dog was a robot—and a special one at that. Unlike other robots, which tend to rely heavily on an internal map to get around, his robot uses a built-in camera along with computer vision and reinforcement learning to walk on tricky terrain.

Why it matters: While other attempts to use cues from cameras to guide robot movement have been limited to flat terrain, Agarwal and his fellow researchers managed to get their robot to walk up stairs, climb on stones, and hop over gaps. They’re hoping their work will help make it easier for robots to be deployed in the real world, and vastly improve their mobility in the process. Read the full story.

—Melissa Heikkilä

Trust large language models at your own peril

When Meta launched Galactica, an open-source large language model, the company was hoping for a big PR win. Instead, all it got was flak on Twitter and a spicy blog post from one of its most vocal critics, ending with its embarrassing decision to take the public demo of the model down after only three days.

Galactica was intended to help scientists by summarizing academic papers and solving math problems, among other tasks. But outsiders swiftly prompted the model to provide “scientific research” on the benefits of homophobia, anti-Semitism, suicide, eating glass, being white, or being a man—demonstrating not only how premature its launch was, but just how insufficient AI researchers’ efforts to make large language models safer have been. Read the full story.

This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Verified anti-vax Twitter accounts are spreading health misinformation
And perfectly demonstrating the problem with charging for verification in the process. (The Guardian)
+ Maybe Twitter wasn’t helping your career as much as you thought it was. (Bloomberg $)
+ A deepfake of FTX’s founder has been circulating on Twitter. (Motherboard)
+ Some of Twitter’s liberal users are refusing to leave. (The Atlantic $)
+ Twitter’s layoff bloodbath is over, apparently. (The Verge)
+ Twitter’s potential collapse could wipe out vast records of recent human history. (MIT Technology Review)

2 NASA’s Orion spacecraft has completed its lunar flyby
Paving the way for humans to return to the moon. (Vox)

3 Amazon’s warehouse-watching algorithms are trained by humans
Poorly paid workers in India and Costa Rica are reviewing thousands of hours of mind-numbing footage. (The Verge)
+ The AI data labeling industry is deeply exploitative. (MIT Technology Review)

4 How to make sense of climate change
Accepting the hard facts is the first step towards avoiding the grimmest ending for the planet. (New Yorker $)
+ The world’s richest nations have agreed to pay poorer ones for the damage caused by global warming. (The Atlantic $)
+ These three charts show who is most to blame for climate change. (MIT Technology Review)

5 Apple uncovered a cybersecurity startup’s dodgy dealings
It compiled a document that illustrates the extent of Corellium’s relationships, including with the notorious NSO Group. (Wired $)
+ The hacking industry faces the end of an era. (MIT Technology Review)

6 The crypto industry is still feeling skittish
Shares in its largest exchange have dropped to an all-time low. (Bloomberg $)
+ The UK wants to crack down on gamified trading apps. (FT $)

7 The criminal justice system is failing neurodivergent people
Mimicking an online troll led to an autistic man being sentenced to five and a half years in jail. (Economist $)

8 Your workplace could be planning to scan your brain 🧠
All in the name of making you a more efficient employee. (IEEE Spectrum)

9 Facebook doesn’t care if your account is hacked
A series of new solutions to rescue accounts doesn’t seem to have had much effect. (WP $)
+ Parent company Meta is being sued in the UK over data collection. (Bloomberg $)
+ Independent artists are building the metaverse their way. (Motherboard)

10 Why training image-generating AIs on generated images is a bad idea
The ‘contaminated’ images will only confuse them. (New Scientist $)
+ Facial recognition software used by the US government reportedly didn’t work. (Motherboard)
+ The dark secret behind those cute AI-generated animal images. (MIT Technology Review)

Quote of the day

“It feels like they used to care more.”

—Ken Higgins, an Amazon Prime member who is losing faith in the company after a series of frustrating delivery experiences, tells the Wall Street Journal.

The big story

What if you could diagnose diseases with a tampon?

February 2019

On an unremarkable side street in Oakland, California, Ridhi Tariyal and Stephen Gire are trying to change how women monitor their health.

Their plan is to use blood from used tampons as a diagnostic tool. In that menstrual blood, they hope to find early markers of endometriosis and, ultimately, a variety of other disorders. The simplicity and ease of this method, should it work, would represent a great improvement over the present-day standard of care. Read the full story.

—Dayna Evans

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Happy Thanksgiving—in your nightmares!
+ Why Keith Haring‘s legacy is more visible than ever, 32 years after his death.
+ Even the gentrified world of dinosaur skeleton assembly isn’t immune to scandals.
+ Pumpkins are a Thanksgiving staple—but it wasn’t always that way.
+ If I lived in a frozen wasteland, I’m pretty sure I’d be the world’s grumpiest cat too.
