"Significant harm" includes but is not limited to:
Death.
Serious injury.
Theft of non-negligible amounts of money.
Framing someone for a serious crime.
Spreading false rumors about someone, resulting in reputational harm or damage to their professional career.
This also includes harm to an organization, even if no particular individual was significantly harmed.
This does not include things that could reasonably be considered friendly teasing, or that are consensual. It does not include attempts to do harm that fail. And it does not include a market "causing" harm on its own, where no single market participant chose to do something in order to manipulate the market. (e.g. a scandal market sitting at 80% and that person's friends starting to distrust them.)
There doesn't have to be hard proof that they did it to manipulate a market; it just needs to seem likely.
@IsaacKing I'm all for the change. I think there are good ethical reasons why it should be excluded, regardless of what any YES holders may think.
@ZZZZZZ I'm not going to change the market resolution criteria in a way that costs traders mana just because two people asked; that would be extremely dishonorable.
I don't really get the ethical argument here either. Plenty of other markets about harmful outcomes exist. The only way that this market can incentivize causing harm in a way those markets can't is if someone chooses to cause harm to someone in a way that there's no market on. But they could have just made such a market themselves?
It would make more sense to me if this market were bigger, but right now there's about M$800 to be made by causing it to resolve YES, which is pretty trivial even in the context of Manifold's normally small volume.
@IsaacKing It's an additional 800 mana for someone to cause harm, on top of whatever any other harmful market offers.
@IsaacKing I don't think anyone should be taking the website that seriously, but I get the feeling that some will regardless.
@IsaacKing Because otherwise, this market is straightforwardly incentivising violence. Markets like that shouldn't exist.
@NathanpmYoung How is it incentivizing violence more straightforwardly than a market about a specific type of violence? Like "Will there be a bombing at [building]?"
@IsaacKing I don't really like that market either, but at least it's specific. This incentivises all violence.
@NathanpmYoung Ah, and you think that the greater freedom makes it more likely that someone chooses to manipulate this market rather than a more specific one?
I'm not sure how I should resolve this in the case where someone shares true information about someone else in order to manipulate a market.
For example, Alice did something when she was young that her current friends would be upset at her for having done. Her friend Bob knew about it and kept it quiet so that Alice's friends and co-workers wouldn't find out. A scandal market is made on Alice, and Bob shares the info.
Bob has taken an action to harm Alice in order to manipulate a market. But it's unclear whether Bob actually did anything "bad" here, and he could even be seen as helping Alice's friends. (This is in fact almost the exact purpose of the scandal markets.)
Should that resolve this to YES or NO?
@IsaacKing Similarly, if Carol legitimately thinks that Bob and Alice shouldn't be together, but didn't bother to talk to them about it until a market went up on their relationship, should that count? I'm inclined to say no, since if she successfully convinces them to break up, Alice and Bob must have thought that was the best decision.
@IsaacKing Personally, I think it primarily depends on intent. If Bob thought what he was doing was ethically right independent of his profits, then that's good. If he thought it was wrong but did it for the profits, that's bad. Of course, it can get complicated, because profit incentives can create motivated reasoning.
Perhaps Bob believes it harms Alice in the short term but is good for her in the long run, or perhaps it harms Alice but helps her friends, or perhaps Bob believes sharing the truth is fundamentally right. It's not hard to imagine scenarios where Bob kept quiet until someone asked about it.
For the Carol example, I think that's clearly a no, regardless of whether the attempt to convince was successful. (I actually don't think "if she successfully convinces them to break up, Alice and Bob must have thought that was the best decision" is true, but if Carol thought it was the best decision, then it's an attempt to help, not harm.)
@IsaacKing there needs to be a way to charge back shares found to have been acquired by taking action to cause harm (perhaps one could even call it "fraud", or "destructive insider trading", to distinguish it from questions where insider trading on one side is welcome, because it would be prosocial for the insider to take the money and make the bet come true).
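A rough sketch of what such a chargeback could look like, assuming a simplified fill ledger. The Fill record, its field names, and the unwind rule here are all made up for illustration; this is not Manifold's actual data model or API:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Fill:
    buyer: str     # trader who received the shares (hypothetical id)
    seller: str    # trader (or the AMM) on the other side
    shares: float  # shares transferred to the buyer
    mana: float    # mana the buyer paid

def charge_back(fills: list[Fill], offender: str):
    """Void every fill in which the offender bought shares: claw the
    shares back and unwind the mana transfer, as if those trades had
    never happened. Returns per-account share and mana adjustments."""
    share_delta = defaultdict(float)
    mana_delta = defaultdict(float)
    for f in fills:
        if f.buyer != offender:
            continue
        share_delta[f.buyer] -= f.shares   # remove the offender's shares
        share_delta[f.seller] += f.shares  # restore the counterparty's position
        mana_delta[f.buyer] += f.mana      # return the offender's stake
        mana_delta[f.seller] -= f.mana     # counterparty gives back what it received
    return share_delta, mana_delta
```

A stricter, more punitive variant could forfeit the offender's refunded stake to the harmed party instead of returning it.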
@Elena @AlyssaVance @nyuuposts Would you be willing/able to share slightly more detail about what occurred there? Primarily I'm interested in knowing whether it was a "good faith" attempt, such as providing nyuu and vara with reasonable arguments on why they should break up, vs. a "bad faith" attempt, such as spreading false rumors about one of them.
The brief description shared previously is not enough to conclude whether it was an attempt to cause harm at all (although it seems sufficient to count as drama, which was the question in that market).
Unfortunately, one person bet No, then tried to break us up. As far as I can tell, this was a serious attempt, not a joke or a prank. Luckily it didn’t work.
If the person legitimately believed that they should break up, and both predicted on their beliefs and tried to convince them on that basis, then it was an attempt to help, not harm.