Artificial intelligence tool ‘as good as experts’ at detecting eye problems


Machine-learning system can identify more than 50 different eye diseases and could speed up diagnosis and treatment



Photo by Marten Newhall 

The AI system developed by DeepMind with Moorfields eye hospital and University College London is capable of referring patients with 94% accuracy.

A new machine-learning system is as good as the best human experts at detecting eye problems and referring patients for treatment, say scientists.

The groundbreaking artificial intelligence system, developed by the AI firm DeepMind with Moorfields eye hospital NHS foundation trust and University College London, correctly referred patients with more than 50 different eye diseases for further treatment with 94% accuracy, matching or beating world-leading eye specialists.

“The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients,” said Prof Sir Peng Tee Khaw, the director of the NIHR Biomedical Research Centre at Moorfields eye hospital and the UCL Institute of Ophthalmology.

The two-stage AI system takes a more human-like and intelligible approach to analysing the highly complex optical coherence tomography (OCT) scans of patient retinas. These are commonly used to triage patients with sight problems into four clinical categories: urgent, semi-urgent, routine and observation only.

Five separate machine-learning systems, trained using 877 clinical OCT scans, first create maps of the OCT scans. The five maps are then analysed by a second series of five machine-learning systems, trained on maps created from 14,884 OCT scans from 7,621 patients, which interpret the maps and each give a referral decision.

The referral decisions are combined into one result, with a confidence rating expressed as a percentage. The maps and any differing or ambiguous results can be shown visually to a clinician for their own interpretation and explanation of the referral result.
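The combination step described above lends itself to a simple soft-voting ensemble. The sketch below is a hypothetical illustration of averaging per-model probabilities over the four triage categories and reporting the winner with a percentage confidence; the category names come from the article, but the function, probabilities and structure are assumptions, not DeepMind's actual implementation.

```python
import numpy as np

CATEGORIES = ["urgent", "semi-urgent", "routine", "observation only"]

def combine_referrals(model_probs):
    """Average per-model class probabilities into one referral.

    model_probs: one probability vector per model, each summing to 1
    over the four referral categories.
    Returns (category, confidence_percent).
    """
    avg = np.mean(np.asarray(model_probs), axis=0)
    idx = int(np.argmax(avg))
    return CATEGORIES[idx], round(float(avg[idx]) * 100, 1)

# Five hypothetical model outputs over the four categories:
probs = [
    [0.70, 0.20, 0.05, 0.05],
    [0.60, 0.30, 0.05, 0.05],
    [0.80, 0.10, 0.05, 0.05],
    [0.65, 0.25, 0.05, 0.05],
    [0.75, 0.15, 0.05, 0.05],
]
category, confidence = combine_referrals(probs)
# category == "urgent", confidence == 70.0
```

Averaging also makes disagreement visible: a low winning confidence flags exactly the ambiguous cases the article says are handed back to a clinician.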

Most other AI-based systems are essentially black boxes: data is fed in one end and a result comes out the other, with no way to check how the system reached its decision.

“The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them,” said Dr Pearse Keane, a consultant ophthalmologist at Moorfields eye hospital. “The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight.”

The two-stage approach also makes the systems more adaptable to different OCT machines, which produce images with differing characteristics. Only the mapping system would need to be retrained for different machines, leaving the existing referral system intact.

The next stage is to put the AI system through clinical trials and regulatory approval before it can be used in hospitals for patient referrals. If granted approval, the system will then be available for use across all of Moorfields’ sites for five years.

The researchers said the intelligible AI system could also be used to help train clinicians, and that Moorfields could use both the system and the de-identified dataset used to train it for future non-commercial research efforts.

Experts said AI systems such as those created by the researchers had the potential to help clinicians treat more patients and make the NHS’s limited resources go further.

Robert Dufton, the chief executive at Moorfields Eye Charity, said: “The need for treatment for eye diseases is forecast to grow, in part because people are living longer, far beyond our ability to meet the demand using current practice.

“Artificial intelligence is showing the potential to transform the speed at which diseases can be diagnosed and treatments suggested, making the best use of the limited time of clinicians.”


Security flaws in mobile point-of-sale systems spell money trouble


More like point-of-fail systems


AUGUST 9, 2018 4:00 PM


Security researchers discovered vulnerabilities in mobile card readers. Square and the other companies involved say they’ve fixed the holes.

James Martin/CNET


Cheaper payment systems may cost businesses less, but they could’ve wound up costing customers more.

That’s the word from a pair of security researchers, who discovered that mobile payment systems had vulnerabilities that could let hackers steal credit card info or change the value of what people pay.

Researchers Leigh-Anne Galloway and Tim Yunusov of cybersecurity company Positive Technologies revealed their findings at the Black Hat security conference in Las Vegas on Thursday.

Point-of-sale terminals, such as credit card readers, are an increasingly common target for hackers, since that’s where the money is. Cybercriminals can steal troves of financial data from weak cybersecurity on these terminals, and attacks on the systems have affected millions of people at hotels, stores and restaurants.

Attackers are specifically looking at inexpensive card readers, which have exploded in popularity as small businesses like your local food truck use them to accept noncash payments.

These mobile readers often attach to another device, like a smartphone or a tablet. Researchers estimate that 46 percent of all noncash payments will be done through a mobile reader by 2019.

Galloway and Yunusov looked at readers from the most popular mobile point-of-sale, or mPOS, providers in the US and Europe: Square, PayPal, SumUp and iZettle.

The researchers said they wanted to examine how much security was built into mobile readers costing less than $50. Physically, Galloway said, the devices were hard to break into, but on the software side the pair found a few holes.

Three of the readers mentioned had a flaw that could’ve let a dishonest merchant change what customers see on the screen. That meant the device could show that a transaction failed when it really didn’t and prompt customers to pay twice. The vulnerability opened up various possibilities for merchants to steal from customers.

“It’s possible, if you were a fraudulent merchant, you could change the transaction value to make it a higher value than what’s displayed on the reader,” Galloway said in an interview before Black Hat. “The significance is that this is a realistic attack vector because so many transactions are carried out through swipes.”

The display could also be adjusted to ask customers to use the magnetic stripe on the credit card, instead of the more secure chip. That would make victims vulnerable to attacks already associated with swiping cards.

Many mPOS terminals use Bluetooth to connect to devices, and the Positive Technologies researchers found that most of them didn’t use a secure form of pairing.

In a secure protocol, Galloway said, Bluetooth devices could be associated with a password, or with a notification that lets people know what gadgets they’ve connected to wirelessly. Galloway and Yunusov said this wasn’t being implemented in the readers they looked at.

“You might just never know if someone was an attacker [and] walked into your cafe and connected to your reader,” Galloway said.

The vulnerabilities hadn’t been used by any attackers yet, the researchers said. If you’re concerned, your best bet is to stay away from swiped transactions and stick to the security chips, which offer better protection.

Square said the vulnerabilities were only on the Miura M010 Reader, a third-party sales system that connected to Square’s software.

“As soon as we became aware of a vulnerability affecting the Miura Reader, we accelerated existing plans to drop support for the M010 Reader, and began transitioning all these Square sellers to a free Square Contactless and Chip Reader,” a Square spokesperson said.

Miura Systems Chairman Andrew Dark downplayed the attack’s potential, saying you’d need “a skilled attacker being present” to pull it off. He also said these vulnerabilities were only on an older version of Miura’s readers and have been fixed since 2016.

In a statement, SumUp said there haven’t been any attacks because the vulnerabilities relied on magnetic stripes instead of security chips. The company said it’s fixed the vulnerabilities mentioned. PayPal and iZettle also said they’d fixed the discovered vulnerabilities.

Yunusov said he and Galloway first informed the affected companies in April.

Samsung Pay: What you need to know (FAQ)


Here’s how Samsung Pay works and what you need to use the mobile payment system.


JULY 27, 2018 4:30 PM PDT


Samsung wants its mobile payment system, Samsung Pay, to replace the plastic cards in your wallet.

It works in almost all stores — including those that use older magnetic-stripe point-of-sale terminals — without merchants needing to opt in to any program or update hardware.


Where is Samsung Pay available?

It’s currently available in (deep breath) the US, UK, Australia, Belarus, Brazil, Canada, China, Hong Kong, India, Italy, Malaysia, Mexico, Puerto Rico, Russia, Singapore, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, United Arab Emirates and Vietnam.

Which phone, bank, card and carrier do I need?

Samsung Pay works with the Galaxy S9, S9 Plus, Note 8, Galaxy S8, S8 Plus, S7, S7 Edge, S6 Edge Plus, S6, S6 Edge, S6 Active and Galaxy Note 5. It’s also available on smartwatches: the Gear S2 (only for transactions on NFC terminals), Gear S3, Gear Fit 2 and Gear Sport.

All major carriers in the US support Samsung Pay: AT&T, Cricket, MetroPCS, Sprint, T-Mobile, Verizon and US Cellular.

You will need a Visa, Mastercard, Discover or American Express card. See the full list of supported banks and cards in the chart below.

Where can I use Samsung Pay?

Samsung claims that its system will work with almost all point-of-sale systems: NFC, magnetic stripe and EMV (Europay, Mastercard and Visa) terminals for chip-based cards.

It won’t work with readers where you need to physically insert your card into a slot, however, such as those found at gas stations and on ATMs.

Using Samsung Pay in stores around San Francisco, I found that it was accepted almost everywhere. These included vendors using Square readers; NFC terminals in major chain retailers like Trader Joe’s and Walgreens; and magnetic stripe readers in smaller stores. Merchants may still require you to sign a receipt for the transaction.



What does the setup process involve?

Once the Samsung Pay app is installed, register your fingerprint on the device if you haven’t done so already. The camera will launch so you can scan your credit or debit card. Check that the number, name and expiration date are all correct. Finally, the app will need to verify the card by sending you an SMS or email from your bank.

A total of 10 cards can be added to Samsung Pay.

How does it work?

Samsung Pay uses near-field communication (NFC) technology to process payments at tap-to-pay terminals.

The system works with almost all magnetic stripe terminals as well. These older systems are widely deployed throughout retailers in the US. Samsung Pay uses what it calls magnetic secure transmission (MST) when the phone is held against one of these registers: the phone emits a magnetic signal that simulates the magnetic stripe on the back of a credit or debit card.

In 2015 Samsung acquired mobile payment company LoopPay, which developed the MST technology used in Samsung Pay.


How do I make a payment with Samsung Pay?

From the lock screen, swipe up from the small Samsung Pay bar. Select the card you want to pay with, then verify with your fingerprint, iris scan or PIN. Hold the back of the phone against the payment terminal.


You’ll see a transaction notification pop up at the top of the screen.

If you’re using a debit card through Samsung Pay, you may still need to enter the card PIN on the terminal. Once the payment is made you will get a notification that confirms the merchant name and the amount of your purchase. This information is also documented in the Samsung Pay app.

How does it differ from Apple Pay or Google Pay?

The main difference is that Samsung Pay works at almost all stores that accept credit or debit cards, not just those with tap-to-pay NFC terminals. Samsung Pay also offers a cash-back feature with a number of participating retailers, and Chase Pay users can link their digital wallets with Samsung Pay. Here is a chart that compares the three services.



Is it secure?

Samsung Pay does not store the account or credit card numbers of cards on the device, instead using tokenization for transactions. Each time a purchase is made, the Samsung Pay handset sends two pieces of data to the payment terminal. The first is a 16-digit token that represents the credit or debit card number, while the second piece is a one-time code or cryptogram generated by the phone’s encryption key.
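As a rough illustration of how tokenization works in general (not Samsung's actual protocol, whose details are proprietary), the toy Python class below pairs a stored 16-digit token with a per-transaction HMAC "cryptogram". The class, key handling and message format are all assumptions made for the sketch.

```python
import hmac
import hashlib
import secrets

class TokenizedCard:
    """Toy tokenization sketch: the real card number (PAN) never leaves
    the issuer; the device holds only a token and a device key."""

    def __init__(self, token: str, device_key: bytes):
        self.token = token          # 16-digit stand-in for the PAN
        self.device_key = device_key
        self.counter = 0            # transaction counter prevents replay

    def make_payment_payload(self, amount_cents: int) -> dict:
        """Build the two pieces sent to the terminal: token + one-time code."""
        self.counter += 1
        msg = f"{self.token}|{amount_cents}|{self.counter}".encode()
        cryptogram = hmac.new(self.device_key, msg, hashlib.sha256).hexdigest()
        return {"token": self.token, "counter": self.counter,
                "cryptogram": cryptogram}

card = TokenizedCard(token="4111222233334444",
                     device_key=secrets.token_bytes(32))
p1 = card.make_payment_payload(1999)
p2 = card.make_payment_payload(1999)
# Same amount, but each payload carries a fresh one-time cryptogram,
# so a captured payload cannot simply be replayed.
```

Because only the issuer can map the token back to the real card number and verify the cryptogram, a stolen payload is useless on its own.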


What if I lose my phone?

Payments can’t be made from your phone without being authorized via fingerprint, PIN or iris scan. If you register with Samsung’s Find My Mobile service you can remotely erase information on the phone, including any cards stored in Samsung Pay.


Can I use Samsung Pay even without a Wi-Fi or cellular connection?

Yes, although you will only be able to make 10 payments without the device being on Wi-Fi or cellular data. You will also need an active internet connection to add a card and to access transaction history.


Can I use Samsung Pay overseas?

For US customers, if you can use your card overseas then it’s likely it will also work with Samsung Pay when you travel. The caveat is that if you try to add a card while you are traveling outside the US, you may have to contact your bank.


Does Samsung Pay also work for returns?

Yes. Merchants may require you to hold the phone against the payment terminal in the same way as when you make a payment to process the return. Also, you will be asked to match the payment information on the receipt with the last four digits of your virtual card number. This is accessed through the Samsung Pay app.

Pixel perfect: These modeling agencies don’t hire real people


by Kaya Yurieff   @kyurieff
July 30, 2018: 9:21 AM ET

Brenn’s first Instagram post shows her staring into the camera with light eyes that look just a bit too far apart. Her dark skin glows, her pursed lips glisten, and she wears her short, curly hair tousled just so.

Thumb through her feed and you’ll see her modeling a strappy black bikini. She’s got a pensive black and white portrait, too. It’s all perfectly lit and expertly edited.

A casual glance might leave you thinking Brenn is just another model promoting herself on Instagram. But look more closely, especially at the eyes, and you’ll see her secret: Brenn is the latest computer-generated creation of fashion photographer Cameron-James Wilson.

Wilson created a viral sensation earlier this year with Shudu, who is known as the world’s first digital supermodel. She looks so realistic that brands like T-shirt company Soulsky asked her to promote their products before Wilson had made it clear Shudu is just pixels on a screen. That hasn’t stopped his creation from amassing more than 130,000 Instagram followers.


A photo of Brenn

Brenn and Shudu are among a growing number of computer-generated models beginning to rival some of their real-world counterparts in popularity. Lil Miquela, created by the Los Angeles startup Brud, has more than 1 million Instagram followers — on par with real-world models like Devon Windsor and Belle Lucia. And so it was perhaps inevitable that Wilson would launch an agency dedicated to representing his digital creations. He calls it The Diigitals.

Related: Instagram star isn’t what she seems. But brands are buying in

Wilson is following the British agency Irmaz Models into the space. The company, which launched in April, creates bespoke computer-generated models. “Brands can specify the look they’re exactly after, down to the race, gender and hairstyle,” said Philip Jay, the former Playboy photographer who leads Irmaz Models alongside Irma Zucker.

The agents “representing” these models say their work creates new opportunities for branding and advertising while giving clients greater control over their images. And they never have to worry about a popular model missing work, being unavailable, copping an attitude, or getting older.


Several of the digital models created by Irmaz.

Looking beyond marketing, Wilson sees fashion houses that design clothing using 3D modeling being particularly interested in creations like Brenn. “The best way to showcase that would be on a 3D model,” Wilson said.

Real-world models exist in three dimensions, of course, raising the possibility that their pixelated peers could put them out of work. Jay concedes that’s a possibility, but says it’s nothing new. “There’s always a new model on the block,” he says.

Wilson is less worried because he plans to work only with companies “pushing forward with new technologies” in which his models make sense. He mentions VR as a possibility. And he always asks himself the same question when evaluating potential clients: “Is there a conversation that they want to spark about technology, fashion and the future?”

Both men say they plan to create a diverse collection of models. “The modeling world is generally dominated by white females unfortunately, but we might sort of change that a little bit,” Jay said. Shudu and Brenn are black, and Brenn is a plus-size model. Wilson even created a pointy-eared, long-necked alien named Galaxia. Irmaz has an alien, too, and both agencies feature male models as well. “It’s about people of all shapes, sizes, and ethnicities,” Wilson says.


Wilson’s models: Brenn, Galaxia and Shudu.

Related: Snapchat is fighting Instagram for celebrity users

Renee Engeln, a Northwestern University professor and psychologist who studies body image, notes that real women of all shapes, sizes, and ethnicities already exist. And she’s troubled by the question of what allowing brands to create their “perfect” model will do to body image and self-esteem, particularly among women.

“There is no world in which this is good for women’s health,” she said. “To know that women are going to be comparing themselves to women who … are literally inhuman strikes me as some kind of joke that isn’t very funny.”

But Bill Wackermann, the CEO of Wilhelmina Models, considers digital models a novelty at best. Consumers, he said, want a “personal connection through real eyes, a real expression.”

Stare into Brenn’s eyes and you begin to see why he’s not worried. “These are marketing ploys that get attention and generate interest,” he said. “It’s a fickle business which moves on to something else really quickly.” So even Brenn might soon find herself pushed aside by the next new model on the block.





Would you purchase a social robot for your home?


CNET member Anthony P. is a big fan of robotics and is curious to find out if others would be interested in a social robot for their home.

JULY 27, 2018 12:43 PM PDT

Is it time to become friends with robots? In this edition of CNET Member Asks, jbs_Reptile AKA Anthony P. from San Francisco is a big robot fan.

He just bought his first robot to help clean his house, but he’s also fascinated by robots that could take on a more social role in the household. Read what he has to say below.


jbs_Reptile: “I have a deep interest in technology and robotics. I love watching videos I see on social media showcasing Boston Dynamics robots such as Spot and Atlas. I remember seeing a robot called ASIMO in a Honda commercial years ago. It was in a history museum and I thought it was cool how something so technologically advanced was in the presence of artifacts. Fast forward to today, I own my first robot (Roomba) and I am getting excited about the abundance of soon to be household robots. I hear rumors of a project “Vesta” from Amazon which will be a household robot. I envision a robot that will be able to do numerous tasks from getting a beer from the refrigerator to cleaning my pad.

“Knowing Amazon I am sure Alexa will be built inside and hope it has appendages. I am not a huge fan of what I call the “kid” version of household robots like Jibo or Kuri. They can do a basic task such as control your smart devices and take pictures but lack appendages to perform a human-like task. I think people will open up to the idea of having their own robot in their household. Most people I know own a Roomba, a smart assistant, or some sort of smart tech. You are starting to see them in public places like Pepper. She has been in malls to individual stores being able to recognize human emotions and interact with humans in a meaningful way. Robots will be able to make our day to day lives easier by freeing up time. I’m wondering if people would consider a social robot in their home?


Manuel Flores/CNET

“Owning your own social robot is a conversation piece on its own. Friends and family will be inclined to socialize with the robot due to the uniqueness. The humanlike nature and emotional recognition will make it easier for people to communicate and relate. Think about being at a party and you have people who speak different languages and maybe most are not socializing due to being shy. A social robot can help break the ice by translating speech to people, making jokes, doing some sort of cool trick that warms up the crowd.”





New research from Ben-Gurion University of the Negev in Israel, which previously showed how easy it is to hack 3D printed drones, proposes the use of “audio fingerprints” to help 3D printing avoid cyberattacks.

The team’s research speaks to concerns surrounding the security of 3D printing – a discussion that has tremendous value in industrial additive manufacturing sectors such as aerospace, automotive and defense.


A sabotaged quadcopter’s 3D printed propeller breaks during flight from the Ben-Gurion University of the Negev dr0wned study. Image via Yuval Elovici/Ben-Gurion University of the Negev

How does that sound?

To start the Ben-Gurion University study, the researchers explain that “in FDM technology, the geometry of a printed object is defined by the movements of four stepper motors” – three for the X/Y/Z axes and one for filament extrusion. When 3D printing, these stepper motors generate a unique sound that is directly related to the specifics of the 3D modeled object, i.e. small features/layers yield short, high-pitched noises and longer layers create a more prolonged sound.


Example audio fingerprints of two “benign” (unmodified) 3D printed cubes. Image via Ben-Gurion University of the Negev

As such, a perfect version of an object as it is 3D printing will emit a very specific sound. An imperfect version with, for example, internally embedded gaps or voids will sound different.

The Ben-Gurion University team’s idea is to record the sound of a perfect, 3D printed object, and use this as a “master audio fingerprint.” Each time the same object is 3D printed, the sounds of the stepper motors are recorded, and this is compared real-time to the master file to ensure it matches up.

A large variation between the wave patterns therefore signals a potential flaw in the part’s structure. Once a flaw is detected, the print is stopped in progress, saving time and material.
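The comparison described above can be sketched as a normalized cross-correlation between the master fingerprint and the live recording, stopping the job when similarity falls below a threshold. This is a simplified stand-in for the paper's actual detection pipeline; the synthetic signals, threshold and sample count below are invented for illustration.

```python
import numpy as np

def similarity(master: np.ndarray, live: np.ndarray) -> float:
    """Normalized cross-correlation of two equal-length recordings:
    close to 1.0 for the same print, near 0 for unrelated sounds."""
    m = (master - master.mean()) / (master.std() + 1e-12)
    n = (live - live.mean()) / (live.std() + 1e-12)
    return float(np.dot(m, n) / len(m))

def check_print(master: np.ndarray, live: np.ndarray,
                threshold: float = 0.9) -> str:
    """Stop the job if the live audio drifts from the master fingerprint."""
    return "continue" if similarity(master, live) >= threshold else "stop"

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
master = np.sin(2 * np.pi * 440 * t)             # reference motor sound
benign = master + 0.05 * rng.normal(size=8000)   # same print, mic noise
sabotaged = np.sin(2 * np.pi * 300 * t)          # altered geometry, new pitch

print(check_print(master, benign))     # continue
print(check_print(master, sabotaged))  # stop
```

In practice the recordings would be chunked and compared as the print progresses, so a sabotaged layer can halt the job mid-print rather than after completion.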


Comparison of a master audio fingerprint (blue) and the audio recorded from a part that has been sabotaged. Image via Ben-Gurion University of the Negev

“Highly efficient in detecting cyber-physical attacks”

Using this method, the team successfully detected six types of potential sabotage attack on 3D printed parts, including voids, altered layer thickness, rescaling of the 3D printed object, X, Y or Z orientation changes, and fill pattern modification.

Changes to the amount of extruded filament and to the printing temperature, however, are not detectable by audio fingerprint – though such prints are likely to fail from the outset anyway.


Process of verification of 3D printer audio fingerprints. Image via Ben-Gurion University of the Negev

Conclusions state that “the proposed detection method is highly efficient in detecting cyber-physical attacks that aim to modify the object’s geometry or the printing process timing.”

The full results of this study, titled “Digital Audio Signature for 3D Printing Integrity”, are published open access in the journal IEEE Transactions on Information Forensics and Security. The paper is co-authored by Sofia Belikovetsky, Yosef Solewicz, Mark Yampolskiy, Jinghui Toh and Yuval Elovici.

RealNetworks gives away facial recognition software to make schools safer


Former music giant hopes its program will help protect kids on campus.


JULY 17, 2018 6:00 AM PDT



SAFR recognizes a face through the connected camera.

A happy emoji with a score of 80 out of 100 appeared on the screen as Max Pellegrini, president of RealNetworks, smiled into the camera. His name and age also appeared on the screen.

Pellegrini was demonstrating the former music giant’s latest venture: facial recognition software designed to make children on school campuses safer. RealNetworks’ program, called SAFR, was released Tuesday as a free download on the company’s website for kindergarten through 12th grade schools in the US and Canada.


President of RealNetworks Max Pellegrini demonstrates how SAFR works.
Marrian Zhou/CNET

The move to facial recognition marks a radical transformation for RealNetworks, formerly a streaming-music service before a series of lawsuits, reorganizations and asset sales forced the company to look elsewhere for growth. The software is intended to combat a dramatic rise in school shootings, as a fierce debate continues over how to keep children safe from on-campus gun violence.

“When tragedies like the Parkland [shooting] happened, it just seemed to us as parents that we can do something good for society [with this technology],” said Rob Glaser, CEO of RealNetworks and former Microsoft executive. “One of the things we heard from the schools was that [they] don’t have a lot of budget, so we say let’s just create a version for free that any school can use.”

As companies wrestle with privacy and convenience issues related to facial recognition, I sat down with Glaser and Pellegrini to discuss what makes their software different.


How does it work?

The program isn’t intended to interact with the children. It’s for adults, specifically staff and parents. It’s been in testing for the past six months, guarding students at the University Child Development School in Seattle, where Glaser’s own children attend.

The software is mainly used during school time when entrance is restricted to protect students’ safety. After adults register their face and name at an iPad kiosk, school gates automatically open when connected cameras verify their identity in the school’s database. Adults can opt out of the system and buzz in manually, but Glaser said that roughly 300 to 400 people voluntarily registered in the system.

The school’s front desk used to monitor cameras and buzz people in, but the process usually took a long time, said Paula Smith, head of school at UCDS.

“Our school is in a very urban area and huge density [of population] come into this neighborhood. We weren’t able to give families badges and have them be lost,” Smith said. “It’s nice to have this software and have access [for staff and parents]. It also frees up the front desk to take care of the kids.”

Real says its facial recognition technology achieves 99.8 percent accuracy on the Labeled Faces in the Wild test, a database of face images designed for studying the problem of unconstrained face recognition. The company’s algorithms received a high ranking from the National Institute of Standards and Technology, a research agency under the Department of Commerce.

About data security and privacy

To address privacy concerns, SAFR encrypts all facial data and it can be deployed in the cloud or used locally without internet, Real says.

“Data stays in the school; there’s nothing in the cloud,” Pellegrini said. “We get statistics, but we don’t get the faces.”

To put it in simple terms: a fingerprint is generated for each face, and the software matches it to information stored in a database hosted by the school. This way, if one school’s database is hacked, the breach is unlikely to spread to other schools.
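The matching step can be pictured as a nearest-neighbour search over face embeddings with a similarity threshold. The sketch below is a generic illustration of that idea, not SAFR's actual algorithm; the embedding size, threshold and names are assumptions.

```python
import numpy as np

def best_match(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Compare a face embedding ("fingerprint") against a locally hosted
    database using cosine similarity; return the matching name or None."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_name, best_score = None, threshold
    for name, stored in database.items():
        score = cos(probe, stored)
        if score > best_score:      # keep the closest match above threshold
            best_name, best_score = name, score
    return best_name

rng = np.random.default_rng(42)
db = {"parent_a": rng.normal(size=128), "staff_b": rng.normal(size=128)}
probe = db["parent_a"] + 0.1 * rng.normal(size=128)  # same face, new photo
stranger = rng.normal(size=128)                      # unregistered visitor

print(best_match(probe, db))     # parent_a
print(best_match(stranger, db))  # None -> fall back to manual buzz-in
```

Because only these numeric fingerprints are stored, a stolen database does not directly expose face photos, which is consistent with the company's claim that "we get statistics, but we don't get the faces."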

However, the program does face challenges, especially as children’s age differences come into consideration.

The pilot at UCDS registered only staff and parents in the system because its students are in kindergarten through fifth grade and always escorted by adults. But when it comes to teens who are more independent, it’s unclear how the technology can navigate challenges like parental approvals and privacy protection for minors.

“We’re going to learn how schools that are further up in the age range want to use the system,” Glaser said. “You might think a high school would register kids in the system so they can come and go.”

In regards to privacy concerns, Pellegrini emphasized that the company isn’t interested in making money off of data collection.

“We don’t want to be [in] the game of the government security,” he said. “We want to be very respectable [to] privacy.”

Real makes its SAFR free to schools, but it has ambitions to make money by looking beyond schools.

The next step

After launching the software for schools, Real looks to tap into commercial markets, introducing premium versions of SAFR this fall.

Pellegrini said that its facial recognition technology can apply to many public places — office, stadium, gym club, concert hall, movie theater and more.

The company is also thinking about launching a premium version for schools, Glaser said, adding that the free version will stay fully functional.

“This is our launch to get [onto] the map, to get visibility,” said Pellegrini. “[At the same time,] we want to tackle something that’s very meaningful.”