just a note to the programmers on here: if your work involves ai, neural networks, or other expensive algorithms designed to replace human judgement, consider how much it's truly meant to be more equitable and/or efficient and whether it meets that goal. if it doesn't and you're just automating existing judgement at great expense, the purpose of including a neural network is to obfuscate culpability

this is not theoretical

really you can just go and keep making street fighter or whatever


@triz also: if you are training it from human behavior and you haven't taken into account how that human behavior is flawed (such as being racist, with institutional racism issues like 'applicants with non-white names rejected more often' or 'redlining means no black people in the nice neighborhoods'), you're not making some big elegant solution.

you're just fucking up with extra steps

which means you're bad at your job and your programming is shit and you should feel bad for being bad.
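[editor's aside: the "fucking up with extra steps" point can be sketched concretely. This is a purely synthetic toy, not any real system: if the training labels come from biased human decisions, even a trivially simple "model" fit to those labels reproduces the bias.]

```python
# Toy sketch with made-up data: historical reviewers approved on skill,
# but held group "B" applicants to a higher bar. A model trained on
# those labels learns the double standard, not some neutral truth.
import random

random.seed(0)

def biased_label(skill, group):
    # the bias lives in the labels: group B needs a higher skill score
    threshold = 0.5 if group == "A" else 0.7
    return 1 if skill > threshold else 0

applicants = [(random.random(), random.choice("AB")) for _ in range(10_000)]
labeled = [(skill, group, biased_label(skill, group))
           for skill, group in applicants]

def fitted_threshold(group):
    # stand-in for "training": recover the approval bar implied by the labels
    approved = [skill for skill, g, y in labeled if g == group and y == 1]
    return min(approved)

# the fitted model applies a higher bar to group B, same as the humans did
print(fitted_threshold("A"))  # close to 0.5
print(fitted_threshold("B"))  # close to 0.7
```

The pipeline looks objective (data in, thresholds out), but the output is just the old judgement with extra steps.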

@triz 95% of all AI stuff is devoted to predicting the stock market, and is thus doomed to failure.

@triz Yeah, and why do that stupid Silicon Valley stuff when you could be using logic programming and Lisp macros to roughly approximate human reasoning capabilities?

@pizza_pal i mean my point was more about how it enables redlining without anyone having to actually get caught with a map with literal red lines drawn on it, and other ways it perpetuates institutional and societal bias, and specifically racism. but enjoy your programmer jokes i guess

@triz Oh yeah, no kidding! Not to make light of it; those techniques and practices have a demonstrated tendency to preserve or amplify existing inequalities, and the practice of redlining is one of the best examples of this, I think.

Silicon Valley is only interested in surveillance capitalism. I try to think about technology that would be useful for a better society: less hierarchical, a jobs guarantee, not doing lots of militarized racial violence and shit, etc.

@triz I only use neural nets for stupid stuff.

Like making a person look like pasta.

@triz i actually used to work on, like, one of the few morally neutral applications for ML! i don’t want to say what it is, but it’s a domain where each problem generally has an objective answer, ML techniques blow non-ML techniques out of the water, and ML algorithms can solve the problem far faster than a human ever could with comparable accuracy.

... i wound up quitting anyway because i was worried about what my project could be used for.

@hierarchon @triz So, when the larger picture was considered, it wasn't necessarily morally neutral.

Good on you for leaving. I wish we'd all make such judgement calls.


The problem is that there are several notions of "equitable" that are themselves in conflict, and impossible to reconcile without robust discussions involving non-computer scientists.

I've made the focus of my ML work to enhance rather than replace human judgement, with the understanding that only humans are equipped to make the necessary tradeoffs, and even then not all humans.

@zzz @triz

I have believed for a long time that engineers need a background in the liberal arts, in order to ask questions like "to what end?" or "sez who?"

@publius @triz

While this is true, I think many projects would benefit from getting input from the non-engineer stakeholders.

@zzz @triz

Undoubtedly. But that requires things like identifying who those folks are. Being able to put your work in context helps ease, & make sense of, the process of doing all that.

@publius @triz

I think you underestimate the very real incentives in an organisation for engineers and management to not have ethical discussions. I agree that's what needs to happen, but the incentives will need to be enforced from outside the organisation.

@zzz @triz

No, I don't underestimate them. They are formidable. But I think that building up the engineers to think more about their role in society, to demand ethical conduct from the people with the money, is very important, even if we do find a way to change our society so that managers & financiers aren't mostly actual or functional sociopaths.


Ah, neural networks: What if we invented a way to give computers a primitive sense of intuition and then deployed it extensively in situations where society has discovered that it's disastrous to make decisions by intuition?

@triz we let racist robocop do the policing now, our hands are clean

he also hates poors

@triz I’m a programmer and, like…I cannot think of a single thing that benefits from artificial intelligence/neural networks over human judgment.

Anything important should not be run using AI. Anything unimportant doesn’t need it either.

If our problems are beyond the scope that individual humans can handle, we should be working on reducing the scope.

I will fight anybody about this.

Skull Dot Website!