François Chollet's "Notes to Myself on Software Engineering" is framed as reminders from Chollet to himself, but it's really a wonderful list of extremely sensible advice on how collaborative projects work, how to be a good collaborator, how to build things for other people to use, and how to collaborate with the future engineers and builders who will someday want to hook things up to the thing you're making.
For example: "Code is also a means of communication across a team, a way to describe to others the solution to a problem. Readable code is not a nice-to-have, it is a fundamental part of what writing code is about;" and "Users are focused on their own specific use case, and you must counter this with a holistic and principled vision of the whole project. Often, the right answer is to extend an existing feature."
I'm especially taken with his thoughts on APIs, like "Simple things should be simple, complex things should be possible. Don’t increase the cognitive load of common use cases for the sake of niche use cases, even minimally."
Then there's the section on having a good and worthy career in technology: "When making any choice with long-term repercussions, place your values above short-term self-interest and passing emotions — such as greed or fear. Know what your values are, and let them guide you." That is very good advice.
Notes to Myself on Software Engineering [François Chollet/Medium]
(via Four Short Links)
Generative Adversarial Networks use a pair of machine-learning models to create things that seem very realistic: one model, the "generator," uses its training data to make new things, while the other, the "discriminator," checks the generator's output and tries to tell it apart from real training data.
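That adversarial game can be sketched in miniature (a toy illustration, not any real GAN implementation: the one-parameter "generator," nearest-mean "discriminator," and hill-climbing update below are all stand-ins for the neural networks and gradient descent a real GAN uses):

```python
import random
import statistics

random.seed(0)

REAL_MEAN = 4.0  # the distribution the generator must learn to imitate

def real_batch(n=500):
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def fake_batch(mu, n=500):
    return [random.gauss(mu, 1.0) for _ in range(n)]

def fooled_fraction(mu):
    """Crude 'discriminator': label a sample real if it lies closer to the
    real batch's mean than to the fake batch's mean. Return the fraction
    of fake samples it mislabels as real."""
    real, fake = real_batch(), fake_batch(mu)
    m_real, m_fake = statistics.mean(real), statistics.mean(fake)
    return sum(abs(x - m_real) < abs(x - m_fake) for x in fake) / len(fake)

mu = 0.0  # the generator's single parameter, starting far from the data
for _ in range(1500):
    candidate = mu + random.gauss(0.0, 0.1)  # propose a small tweak
    # Keep the tweak if it fools the discriminator at least as often.
    if fooled_fraction(candidate) >= fooled_fraction(mu):
        mu = candidate

print(round(mu, 1))  # drifts toward REAL_MEAN as the game plays out
```

The point of the sketch is the feedback loop: the generator is never told what the real data looks like, only how often its output fools the discriminator, and that pressure alone pulls its output toward the real distribution.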
GLTR is a joint project from the MIT-IBM Watson AI Lab and Harvard NLP that analyzes a text and predicts whether it was generated by a machine-learning model.
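The core intuition — sketched here with a crude unigram frequency table standing in for a real language model, so the corpus, threshold, and function names below are all made-up illustrations rather than GLTR's actual method — is that machine-generated text leans heavily on words the model ranks as most probable, while human text samples rarer words more often:

```python
from collections import Counter

# A tiny stand-in "language model": word frequencies from a reference corpus.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "a cat and a dog ran in the park and the sun was warm").split()
freq = Counter(corpus)

def top_k_fraction(text, k=5):
    """Fraction of a text's words that rank among the model's k most
    frequent words; suspiciously high values suggest model-generated text."""
    top_k = {w for w, _ in freq.most_common(k)}
    words = text.lower().split()
    return sum(w in top_k for w in words) / len(words)

# Text built almost entirely from high-probability words scores high...
print(top_k_fraction("the cat and the dog sat on the mat"))
# ...while text with rarer word choices scores much lower.
print(top_k_fraction("my peculiar iguana devoured seventeen pristine waffles"))
```

The real tool does this with a large neural language model and per-token probability ranks rather than raw word counts, but the detection signal is the same: how often does each token fall in the model's high-probability bucket?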
For several years, I've been covering the bizarre phenomenon of "adversarial examples" (AKA "adversarial perturbations"), these being often-tiny changes to data that can cause machine-learning classifiers to totally misfire: imperceptible squeaks that make speech-to-text systems hallucinate phantom voices, or tiny shifts to a 3D image of a helicopter that make image-classifiers hallucinate a rifle.
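Why such tiny changes work is easiest to see on a deliberately tiny model (a hand-rolled linear classifier with made-up weights and labels, not any system mentioned above): for a linear score, the most damaging direction per unit of perturbation is simply the sign of the weights, so shifting every feature slightly against them flips the label while barely changing the input:

```python
# Fast-gradient-style attack on a toy linear classifier: score = w.x + b.
# For a linear model, the gradient of the score with respect to the input
# is just w, so subtracting eps * sign(w) from each feature lowers the
# score as much as possible for a given max-norm perturbation budget.

w = [0.9, -1.2, 0.4, 0.7]  # classifier weights (hypothetical)
b = -0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    return "helicopter" if score(x) > 0 else "rifle"

def sign(v):
    return (v > 0) - (v < 0)

def perturb(x, eps=0.5):
    """Shift every feature by at most eps, against the gradient."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

x = [1.0, 0.2, 0.5, 0.8]  # confidently classified input (score = 1.32)
x_adv = perturb(x)        # each feature moves by only 0.5

print(classify(x), classify(x_adv))  # → helicopter rifle
```

Deep networks aren't linear, but they behave nearly linearly in many directions, which is why the same per-feature nudge trick transfers to them so effectively.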