It goes without saying that everyone involved in Synthetic Futures, ourselves included, believes in the benefits our technologies can bring to society. But still, the potential for unethical use of synthetic media is one of the biggest issues facing our industry.
You hear concerns voiced not just by regulators or journalists but also by friends and family. ‘What does this mean?’ people ask. ‘So, we won’t be able to trust anything we see from now on?’ ‘Will someone be able to video call me pretending to be my daughter and swindle me out of my money?’
While some of these concerns run well ahead of the reality of the tech, you can forgive people for being worried. And you can’t help but wonder how much better Facebook, Twitter, and the rest might be if they had thought harder about the ethical impact of their technology while they were building it rather than trying to tack ethics on afterwards.
With all this in mind, we at D-ID decided to roll up our sleeves and set to work creating an ethical framework we could work and live by and which we hope will be the start of a wider set of industry guidelines.
We have just launched the results which you can view here.
We learned a lot from the process of creating this pledge, lessons we thought worth sharing with others in our space. So, if you’re setting out on a similar journey, here are five of the issues you are likely to face:
1. What even IS ethical?
When you’re a global business you need a global ethics policy.
But unfortunately, values vary massively across the world. From Germans and the Swiss, who are hyper-concerned about privacy, to the more laissez-faire Middle East, attitudes to privacy are far from unified. (And that’s even if you exclude the People’s Republic of China, which is a whole other article.)
You will have to tread a line that works for you.
2. Money vs. Morals
The commercial challenge is one of the biggest. If you’re a start-up, like D-ID, there are tremendous pressures to grow FAST, and it feels unnatural to put any blockers on that. Everyone will find their own limits, but where those limits are won’t necessarily be obvious.
For example, you might not want to work with cigarette manufacturers, but how about vape companies? What about when they’re the same company?
3. When is consent needed?
There are certain synthetic media services, such as voice cloning, where consent is relatively straightforward. But when you’re dealing with faces, it’s less so. For example, one of our big, early hits was powering MyHeritage’s DeepNostalgia, which allowed people to bring to life images of family members who are no longer with us. By definition, dead people can’t give consent, but to us, the social good of connecting people more closely to their family outweighed that issue. And the result was surprisingly emotional – you can view some of the responses here.
4. The whole company
The other big lesson we learned was that this is a project which touches every part of the business, from product and tech to marketing and sales. Without buy-in across all of them, you risk one of two scenarios: guidelines so weak they weren’t worth creating or, perhaps worse, a promise you can’t live up to.
5. You won’t be able to do it all
You will need other players in the ecosystem to do their part. It’s all very well embedding a digital watermark to make the provenance of a piece of media clear, but unless that watermark is read by the major platforms, it’s close to useless.
Despite all these head-scratchers, the process of building our pledge was more than worth it. Not only did we learn a lot about where we really stood on certain issues, but it also forced us to put in place processes and procedures we might not otherwise have considered.
The whole point of Synthetic Futures is to share thinking and shape a positive future for synthetic media, so, if this is a step you’re also considering, we’d love to hear from you.