Re: [tor-talk] Games Without Frontiers: Investigating Video Games as a Covert Channel
Please correct me if I'm misunderstanding you. I think you don't buy some
subset of the following implicit (and, I believe, reasonable) assumptions
that we make:
(1) There is no collusion between application developers and censors.
(2) There is a secure application distribution medium that the censors
cannot compromise.
(3) Crypto attacks against authenticated, encrypted, and
integrity-protected channels are not possible.
In general, the security community has agreed that (3) is a fine assumption
to make. Now there is the question of whether (1) and (2) are reasonable.
I think if we're ever going to succeed in making a good
"look-like-something" protocol, we're going to have to assume that (1) and
(2) hold for the cover application. This assumption has been made in the
past -- e.g., assuming the integrity of the Skype binary, etc. The idea of
"look-like-something" protocols falls apart completely when the cover
application does not satisfy (1) and (2).
Now, do these assumptions hold in the real world for video games? I think
so. I haven't seen any evidence (yet) that they do not. The leak you mention says
nothing about the NSA having back-doors and open attack surfaces in the
software. It just reveals that they're monitoring in-game behavior
(something that they cannot do with Castle if we can distribute passwords
out-of-band). I suspect that getting backdoors into all RTS games (past and
future) is non-trivial and very expensive for a censor, and
Castle will continue to work reasonably well until that happens.
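For intuition, here is a minimal sketch of the kind of behavioral
steganography the paper describes: covert bytes mapped onto in-game actions
that only players inside the password-protected match can observe. This is
not Castle's actual encoding; the 4x4 grid of drop points and the
nibble-per-move mapping are assumptions chosen purely for illustration.

```python
# Hypothetical sketch, NOT the real Castle codec: each byte is split
# into two 4-bit nibbles, and each nibble selects one of 16 cells on
# an assumed 4x4 grid to which a decoy unit is ordered to move. A
# censor who cannot join the private match never sees these orders.

GRID = 4  # assumed grid width: 4x4 = 16 cells = 4 bits per move


def encode(data: bytes) -> list:
    """Translate a byte string into a list of (x, y) move orders."""
    moves = []
    for b in data:
        for nibble in (b >> 4, b & 0x0F):  # high nibble, then low
            moves.append((nibble // GRID, nibble % GRID))
    return moves


def decode(moves: list) -> bytes:
    """Recover the byte string from the observed move orders."""
    nibbles = [x * GRID + y for x, y in moves]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))
```

Under these assumptions, each byte costs two unit-movement commands, and
the channel's capacity is bounded by how many orders per second look
plausible for a human player -- which is exactly the kind of traffic-shape
constraint the threat model above cares about.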
On Thu, Mar 26, 2015 at 3:06 AM, Jon Tullett <firstname.lastname@example.org> wrote:
> On 20 March 2015 at 05:45, Rishab Nithyanand <email@example.com> wrote:
> > Hey all,
> > I just thought I'd share and get feedback about some recent work from our
> > team at Stony Brook University.
> Interesting, thanks!
> I do question one of the early assumptions, though: "Many games also
> include the notion of private games between a limited number of
> players which may only be accessed using a password. This means that,
> even a highly motivated adversary (e.g., one who is willing to run a
> game client themselves) still cannot observe the game state."
> That seems to rest on risky assumptions -- chiefly, that the only
> possible attack is via an external game client. That may be mistaken:
> an adversary has many other avenues, such as attacking or subverting
> the game client software itself, attacking the game network, or
> attacking the operator of the game (eg: Blizzard, in the case of WoW),
> and so on.
> We shouldn't be surprised to find the likes of the NSA attacking
> gaming communities, because they are large communities, often overly
> trusting of their environment (notably the client software), and
> frequently with central control built in.
> For example: http://www.propublica.org/documents/item/889134-games
> You could mitigate some of that, sure. You could choose a less popular
> game (ie: less targeted), with open source client and server software
> (though you'd have to review it too, which is probably beyond the
> skill of most users), which operates in encrypted peer to peer
> fashion. And you can use behavioural steganography as your paper
> describes. Keep raising the bar, I guess. But a lot of that sounds
> like security by obscurity, and a skilled adversary should be able to
> attack that. Any opsec leak, and that castle would fall down fairly
> fast, I suspect.
> Still, fun research. Literally :)
> tor-talk mailing list - firstname.lastname@example.org
> To unsubscribe or change other settings go to