My reading notes for Code Fellows


Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance

If Google’s mission and values state “do no evil,” it’s important to define whose idea of evil they are referring to. Evil can exist on a sliding scale of opinion and circumstance. Regardless of how Google’s systems are being used for surveillance, the question seems to be whether our world should have these technologies at all. Possessing AI surveillance technology and weaponizing it is akin to owning a hammer and using it to harm a person: the potential is always there. How can our world be made better or worse by this technology? What are the tradeoffs? What opinions are they based on?

Will democracy survive big data and AI?

We are designed to live in the wild, not to filter our worlds through our technologies. The technologies of today have existed for only a fraction of a second of humanity’s history, yet they already have the power to influence us in ways we have not yet imagined. Without proper guidance, someone could hijack this relatively unregulated influence mechanism. Deciding what we should or should not do with these tools can feel like a wing-flapping party at the butterfly garden, and the opportunity to wield that power may be available only to a privileged few.