Re: [tor-talk] Cryptographic social networking project
On Sat, Jan 03, 2015 at 10:04:52AM +0000, email@example.com wrote:
> >So let's assume less than 1% of Facebook users use this.. let's take
> >a million for example. Hundreds of thousands of Tor users would then
> >be keeping hundreds of circuits open while they are interacting with
> >the Bulk data servers. What do Tor backend experts think of this
> I think you are criticizing using Tor because it can't handle load in
> mass scales. Using any other approach for anonymization has the same
No, I am just suggesting not to use Tor for something it wasn't built
for. We have been working on a technology that combines anonymization
with multicast distribution and is therefore a lot better suited for
social use cases. I hoped you would see this point and maybe consider
joining forces with us rather than developing something that may run
into scalability limits.
> problem, note that only multi-hop proxies like onion routing can foil
> traffic analysis, simply distributing data across p2p nodes by assuming
> that there is no central service provider to look at data in its
> control panel won't protect your network's metadata. The only solution
You are making rough guesses at our architecture. How can you be so
sure that we aren't doing things right? If you're trying to claim that
anonymous multicast is impossible, you face a hard challenge: there
are papers going back to around 1999 showing that it can be done.
> is encouraging more volunteers to run Tor relays for increasing its
> capacity rather than saying that if millions of users try to use Tor they
You want to use a shopping bag to deliver a cupboard.. and now you
say that using more shopping bags will solve the problem. You can't
solve an exponentially growing problem with a linearly growing
solution. All technologies that address scalability have a multicast
distribution strategy somewhere. In cloud technology it's the way
the database replication is organized in distribution trees. In
Bittorrent it's the way BT grows a tree with every further downloader.
Tor doesn't have that and so far I have not heard of anyone being
interested in changing this. In fact, it would be such a drastic
intrusion into Tor's current mode of operation that it would risk
destabilizing how Tor works today. That is why it is good for everyone that
other platforms like GNUnet, Tribler and I2P experiment with this
challenge and Tor developers who think Tor has reached a sufficient
degree of maturity could come and help the other platforms. I can
imagine an integration happening at some point, since all of these
platforms need a relay router network to perform well.
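To make the fan-out argument concrete, here is a back-of-envelope sketch (an idealized model of my own, not actual Tor or multicast code): with unicast circuits the sender transmits one full copy per recipient, while in a k-ary distribution tree no single node transmits more than k copies, however many recipients there are.

```python
# Idealized cost model (my own illustration, not real Tor behavior):
# how many copies a single node must transmit to reach n recipients.

def unicast_copies(n_recipients: int) -> int:
    """Sender pushes one full copy per recipient (one circuit each)."""
    return n_recipients

def tree_max_fanout(n_recipients: int, k: int = 8) -> int:
    """In a k-ary distribution tree, no node forwards more than k copies."""
    return min(n_recipients, k)

# With 167 recipients (the example scenario from this thread):
print(unicast_copies(167))   # the sender transmits 167 copies
print(tree_max_fanout(167))  # each tree node transmits at most 8
```

The k = 8 branching factor is an arbitrary assumption; the point is only that per-node cost is bounded by a constant in a tree, but grows with the audience in the unicast case.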
> By the way keeping circuits open theoretically won't computationally
> cost much CPU power for relays; only opening circuits requires
> asymmetric cryptography, which is the expensive part, and as I said, when
> Alice opens a circuit to Bob she won't drop it as long as middle relays
> are available.
Still, whenever Alice uses those 167 circuits (example scenario), she is
sending the exact same information to all of those people. If our
anonymization network had native distribution trees rather than unicast
circuits, then this task would be roughly the same as when Twitter
delivers a tweet to all data centers in order to make it appear on
potentially millions of recipient dashboards.
> I'm not sure what is "round robin". You can't rely on friends as a
> remailer to deliver things; how is communication between friend1 and
> friend2 secured to ask friend2 deliver something to friend3? If you
> directly send it then ISP/government/any-other-attack-in-between can
You should ask these questions yourself. If you don't trust your
friends, why are you thinking of developing a social networking
application? It's obvious that anytime you send a message to a
group of your friends, you trust each one of them not to forward
those messages to people you didn't intend to. And it is a totally
normal case of social treason when they in fact do. This isn't
anything you can solve with technology. People want something like
Facebook, and we can give them something like Facebook
that is actually private and anonymous unless one of your friends
cheats on you. That is a huge step forward from the current situation
where everything you say is archived for all times.
> simply discover a relation for friend1-*-friend2, if you use a multi-hop
> proxy to anonymize their connection then you have to trust Tor for
> protecting metadata, as we do. Also, friends might be unavailable (which
> makes it difficult to decide whom to send data to, as you don't know which
> friend is going to remain unavailable for how long), and we must
> instantly deliver everything to all recipients whenever the user shares
> something. Furthermore your plan for handing over entire data so many
> times among friends seems much more complicated+expensive than simply
> directly sending a few byte long packet to both Bob and Bill
> synchronously (hidden service) or asynchronously (public pool) in our
Our plan is completely different from what you write here. Pubsub
distribution channels operate over the backbone, not the individual
friend systems. It is the backbone that ensures everyone gets a copy
of the message she is supposed to get, and the subscribers may not know
of each other: who they are, or how many they are. I don't know why you
assume you can judge what we have been working on in the last decade,
then talk about things that have nothing to do with us.
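For illustration only, here is a minimal pubsub sketch of the idea described above (all names are my own invention, not the project's actual API): the backbone node holds the subscriber set, so the publisher never learns who the subscribers are or how many exist.

```python
# Minimal pubsub sketch (hypothetical API, my own naming).
class BackboneNode:
    def __init__(self):
        self._subscribers = {}  # channel -> list of delivery callbacks

    def subscribe(self, channel, deliver):
        self._subscribers.setdefault(channel, []).append(deliver)

    def publish(self, channel, message):
        # Fan-out happens here on the backbone, not at the sender;
        # the publisher never sees the subscriber list.
        for deliver in self._subscribers.get(channel, []):
            deliver(message)

node = BackboneNode()
inbox = []
node.subscribe("alice-updates", inbox.append)
node.publish("alice-updates", "new post")
print(inbox)  # ['new post']
```

In a real deployment the backbone would of course be a network of relays rather than one object, and subscriptions and deliveries would be encrypted; the sketch only shows where the fan-out responsibility sits.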
[removing more redundancy]
> I described circumstances to generate a new Tor circuit for a new
> identity. You can load many "Blocks" using same Tor circuit but for
> saving a new "Block", the circuit needs to change. For instance, when Alice
> posts a new comment or message, she changes her Tor circuit before
> uploading the "Block", but her friends downloading that "Block" won't
> change their Tor circuit to "PseudonymousServer".
Which again means that the same data is being delivered in hundreds of
copies over the Tor network, rather than having a multicast strategy
that ensures data traverses each network node at most once, or
at least reduces redundancy to a scalable amount.
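The redundancy claim can be put in rough numbers (a simplified model of my own; real relay topologies are messier): with per-recipient circuits, the same payload crosses every relay hop of every circuit, whereas in a distribution tree it crosses each tree edge only once.

```python
# Simplified redundancy model (my own assumption, not measured Tor figures).

def unicast_link_crossings(n_recipients: int, hops: int = 3) -> int:
    """Each of the n circuit deliveries carries a full copy across ~hops links."""
    return n_recipients * hops

def multicast_link_crossings(n_recipients: int) -> int:
    """A tree spanning n recipients has n edges below the root, so the
    payload crosses roughly n links in total, regardless of fan-out."""
    return n_recipients

print(unicast_link_crossings(167))   # ~501 full copies on the wire
print(multicast_link_crossings(167)) # ~167
```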
You insist on only focusing on the cost of establishing circuits, but
I don't believe Tor will be able and willing to deal with an explosion
of redundant data deliveries. There is a reason why Bittorrent is
discouraged over Tor - because it is the same social use case. Tor
scales for a steadily growing number of humanoids that make unicast
exchanges with websites and other server-like applications. It's a
linear challenge that a slow increase in efficiency and number of
relay nodes can tackle.
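As a rough growth comparison (illustrative numbers of my own, not measurements): per-user web traffic stays roughly constant as the network grows, while social fan-out traffic multiplies each user by an average friend count.

```python
# Illustrative growth model (assumed parameters, not measured figures).

def web_load(users: int, requests_per_user: int = 1) -> int:
    """Client/server browsing: load grows linearly with the user count."""
    return users * requests_per_user

def social_load(users: int, avg_friends: int = 150) -> int:
    """Social fan-out: every user's post is delivered to every friend."""
    return users * avg_friends

for n in (1_000, 10_000, 100_000):
    print(n, web_load(n), social_load(n))
```

The avg_friends = 150 figure is an assumption for illustration; whatever the exact number, the social case carries a multiplicative factor that pure browsing does not.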
The moment all of these users start interacting with each other like
crazy, Tor has a problem. I don't understand why I have to tell and
re-tell these basics of scalability as if they were my opinion. This
is how scalability works, or rather doesn't work. If we want an
anonymization platform that can scale socially, we have to make one.
"The greatest shortcoming of the human race is our inability to
understand the exponential function." (Albert Bartlett)
> With more relays and adding more security layers (e.g padding etc) in
> the future, if the Tor team overcomes software bugs, then there is no need
> to worry about deanonymization, at least for the majority of users.
With a completely different message distribution scenario it also makes
sense to rethink anonymization.
> One of the main protections against global adversaries controlling
> both ends of circuits is introducing very high delays for TCP packets (yet
> unknown how long) that makes loading web pages very slow which disturbs
> users who are waiting to view entered URLs (which even make difficulties
> for relays themselves as keeping data for applying delay requires large
> storage buffers...), but in our case as users don't know when a new
> post/comment might appear in their timelines, it won't disturb them if
> app display posts with some delay. So if later on Tor team provide a
> different relay software for volunteers to launch a new parallel onion
> routing network beside their current low latency network, for
> applications like us that are able to endure inconvenience of delays
> then we surely will look into adopting it for more protection against
> global adversaries.
You just described another one of the good reasons why Tor isn't the
appropriate tool for the job we want to get done. Low latency is a
client/server-paradigm requirement that unnecessarily reduces the
anonymity for the use case of a distributed social network.
> Now Tor is not our concern, we are looking for possible mistakes in our
> own software, such as crypto parts, functional bugs, key management et
> cetera. If you find any problems in those areas, please let us know.
But that is the problem with the architecture. You are building a neat
application that doesn't fit the foundation you are putting it on.
But feel free to ignore my feedback. I just wrote to you because you asked
for feedback and because I thought we could be doing better things together.
Now you have an assessment that your plan will likely not work out for
a relevant number of participants, and you are free to find out the hard
way, or to teach me something about scalability after I have worked with
it for ... hmm.. when did I start working on IRC's multicast? That's 25
years ago now. So good luck proving to me that I got it all wrong.