BoingBoing
  • happy mutants: There's a literal elephant in machine learning's room (Cory Doctorow)
  • AI: Two years later, Google solves 'racist algorithm' problem by purging 'gorilla' label from image classifier (Cory Doctorow)
  • scholarship: Adversarial patches: colorful circles that convince machine-learning vision system to ignore everything else (Cory Doctorow)
  • privacy: "Edge AI": encapsulating machine learning classifiers in lightweight, energy-efficient, airgapped chips (Cory Doctorow)
  • happy mutants: Tiny alterations in training data can introduce "backdoors" into machine learning models (Cory Doctorow)
  • security: Researchers think that adversarial examples could help us maintain privacy from machine learning systems (Cory Doctorow)
  • competition: "I Shouldn't Have to Publish This in The New York Times": my op-ed from the future (Cory Doctorow)
  • scholarship: Machine learning classifiers are up to 20% less accurate when labeling photos from homes in poor countries (Cory Doctorow)
  • security: Towards a method for fixing machine learning's persistent and catastrophic blind spots (Cory Doctorow)
  • happy mutants: A 40cm-square patch that renders you invisible to person-detecting AIs (Cory Doctorow)
  • security: Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye (Cory Doctorow)
  • security: Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed (Cory Doctorow)
  • scholarship: Using structured encryption to search protected photos in the cloud (Cory Doctorow)
  • AI: Google makes machine learning image classifier available to the public (Cory Doctorow)
  • AI: Adversarial examples: attack can imperceptibly alter any sound (or silence), embedding speech that only voice-assistants will hear (Cory Doctorow)
  • security: Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design (Cory Doctorow)
  • AI: Google's AI thinks this turtle is a rifle (Cory Doctorow)
  • scholarship: Researchers can fool machine-learning vision systems with a single, well-placed pixel (Cory Doctorow)
  • AI: Techniques for reliably fooling AI machine-vision classifiers (Cory Doctorow)
  • scholarship: The first-ever close analysis of leaked astroturf comments from China's "50c party" reveals Beijing's cybercontrol strategy (Cory Doctorow)

Boing Boing is published under a Creative Commons license except where otherwise noted.

Editors: Mark Frauenfelder, David Pescovitz, Rob Beschizza, Carla Sinclair
Publisher: Jason Weisberger
Sysadmin: Ken Snider