Funding founders building safe and secure AI
I don't see a strong case for investing in "safe" AI when implementation is what scares people. Just because you've made a less discriminatory, de-stereotyped model doesn't mean it's actually safe in practice. The model can still be forced into unsafe decisions anyway, or deployed in contexts where safety is relative or outright impossible. Marketing it as safe is the kind of misnomer that will inevitably disappoint people, no different from the flak OpenAI gets for its own misleading name.
Kudos for trying to tackle the issue, but I think you're looking for the wrong solutions. We don't need safer AI; we need smarter people plugging into the endpoints. Intelligence is a dog-eat-dog world, and I'd feel pretty iffy prioritizing safety over results if my money were on the line.