Frequently Asked Questions

If you have more questions, please post them on the mailing list.

Why do I often see a Connection reset by peer?

java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcher.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
	at sun.nio.ch.IOUtil.read(IOUtil.java:200)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:322)
	at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
	at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)

Since a P2P network creates many connections and closes them as soon as they are no longer needed, many TCP connections end up in the TIME_WAIT state. This can be avoided by setting SO_LINGER to 0, which sends an RST (reset) instead of the normal TCP connection termination (FIN/ACK). Although messing with SO_LINGER is generally not recommended, this seems to be the easiest way to get rid of the TIME_WAIT problem, where thousands of connections sit in the TIME_WAIT state.
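In plain Java, this option is set on the socket before closing it; a minimal sketch:

```java
import java.net.Socket;

public class LingerExample {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        // SO_LINGER enabled with a timeout of 0 makes close() send a TCP RST
        // instead of the normal FIN/ACK sequence, so the socket does not
        // linger in the TIME_WAIT state after closing.
        socket.setSoLinger(true, 0);
        System.out.println(socket.getSoLinger()); // 0 = enabled with zero timeout
        socket.close();
    }
}
```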

What kind of P2P routing mechanism and distance metric are used?

TomP2P uses iterative routing with an XOR distance metric similar to the one described in the Kademlia paper. The protocol is binary in order to keep the overhead as small as possible, and the ID space is 160 bits.
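The XOR metric itself is simple: the distance between two IDs is their bitwise XOR, interpreted as an unsigned integer. A sketch, using BigInteger in place of a 160-bit Number160:

```java
import java.math.BigInteger;

public class XorDistance {
    // Kademlia-style XOR distance between two IDs: a smaller result
    // means the IDs are "closer" to each other in the key space.
    static BigInteger distance(BigInteger a, BigInteger b) {
        return a.xor(b);
    }

    public static void main(String[] args) {
        BigInteger id1 = new BigInteger("0123", 16);
        BigInteger id2 = new BigInteger("0122", 16);
        BigInteger id3 = new BigInteger("ff00", 16);
        // id2 differs from id1 only in the lowest bit, so it is much closer than id3.
        System.out.println(distance(id1, id2)); // 1
        System.out.println(distance(id1, id3).compareTo(distance(id1, id2)) > 0); // true
    }
}
```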

Why do I get “java.lang.IllegalArgumentException: xyz is not in hexadecimal form”?

Number160 expects a hexadecimal string in its constructor that starts with "0x". For example, "0xab123" is a string in hexadecimal form. If you want to create a Number160 from an arbitrary String, use Number160.createHash(xyz);
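The distinction can be illustrated in plain Java (this is only an analogy, not the Number160 implementation): a hexadecimal string parses directly into a number, while an arbitrary string first has to be hashed down to 160 bits, e.g. with SHA-1, which produces exactly 160 bits.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class KeyCreation {
    public static void main(String[] args) throws Exception {
        // Analogue of new Number160("0xab123"): the string must already be hex.
        BigInteger fromHex = new BigInteger("ab123", 16);
        System.out.println(fromHex.toString(16)); // ab123

        // Analogue of Number160.createHash("hello world"): hash an arbitrary
        // string down to a 160-bit key.
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest("hello world".getBytes(StandardCharsets.UTF_8));
        System.out.println(digest.length * 8); // 160
    }
}
```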

Does TomP2P support range queries?

No; in order to fetch data from the DHT you need to know the exact key. However, implementing range queries in TomP2P is not difficult, and if there is demand for such a feature, I'll add it.

Can I have multiple peers listening on the same port?

Yes, joining a master peer is a feature for running multiple peer IDs in the same VM and on the same port. We use this for simulations and "dry runs" of our experiments. A peer that calls listen(Peer master) attaches its listeners for incoming messages to the ports of the master peer. Since each message contains the target peer ID, the master forwards it to the appropriate peer. This way TomP2P can simulate many thousands of peers running on the same port without using too many resources.
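The forwarding idea can be sketched as a small dispatcher (hypothetical names, not the TomP2P classes): every peer behind the shared port registers a handler under its peer ID, and incoming messages are routed by the target ID they carry.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class MasterDispatcher {
    // Target peer ID -> message handler; all handlers share one listening port.
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    public void register(String peerId, Consumer<String> handler) {
        handlers.put(peerId, handler);
    }

    // Each incoming message carries its target peer ID, so the master
    // can forward it to the right peer in the same VM.
    public void onMessage(String targetPeerId, String payload) {
        Consumer<String> handler = handlers.get(targetPeerId);
        if (handler != null) {
            handler.accept(payload);
        }
    }

    public static void main(String[] args) {
        MasterDispatcher dispatcher = new MasterDispatcher();
        dispatcher.register("peer-1", msg -> System.out.println("peer-1 got: " + msg));
        dispatcher.register("peer-2", msg -> System.out.println("peer-2 got: " + msg));
        dispatcher.onMessage("peer-2", "hello"); // only peer-2's handler runs
    }
}
```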

What is the difference between content and location key?

Regular DHTs typically operate with location keys only: this key determines where the content is stored, hence the name location key. However, we often needed to store more than a single value, so we implemented a hash map on every peer in which content can be stored. The keys of this second map are called content keys.

Here is an example:

put(0x123, "test") - The value is stored on the peer with an ID close to 0x123. The content key is not specified, so the default value 0 is used.

put(0x123, 0, "hallo") - The value is stored on the same peer as above, but the value "test" is replaced with "hallo", since the content key was explicitly set to the default value 0.

put(0x123, 1, "world") - The value is stored on the same peer as above, and "world" is stored under the content key 1.

With get(), you can either fetch a specific content key, get(0x123, 1), or, if you set the content key to null, get(0x123, null), you will receive all the values.
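The semantics of these calls can be sketched with a plain two-level map (a simplification, not the actual TomP2P storage layer): the outer key is the location key, the inner key is the content key, with 0 as the default.

```java
import java.util.HashMap;
import java.util.Map;

public class TwoLevelStore {
    // location key -> (content key -> value); content key 0 is the default.
    private final Map<String, Map<Integer, String>> store = new HashMap<>();

    public void put(String locationKey, int contentKey, String value) {
        store.computeIfAbsent(locationKey, k -> new HashMap<>()).put(contentKey, value);
    }

    public void put(String locationKey, String value) {
        put(locationKey, 0, value); // no content key given: use the default 0
    }

    public Map<Integer, String> getAll(String locationKey) {
        return store.getOrDefault(locationKey, Map.of());
    }

    public static void main(String[] args) {
        TwoLevelStore dht = new TwoLevelStore();
        dht.put("0x123", "test");     // stored under content key 0
        dht.put("0x123", 0, "hallo"); // replaces "test" (same content key)
        dht.put("0x123", 1, "world"); // stored alongside under content key 1
        System.out.println(dht.getAll("0x123")); // {0=hallo, 1=world}
    }
}
```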

Does TomP2P provide bindings for SOAP?

No, bindings are not supported at the moment.

It seems that nothing works; I cannot connect to any peer.

Please make sure you have disabled your firewall. Even if you run it locally, make sure that you have full access to your network.

TomP2P crashes the JVM: A fatal error has been detected by the Java Runtime Environment.

There is a bug in Java version 1.6.0_18. A summarized description can be found in the Netty forum. The solution is to either upgrade to the latest version or use -XX:-ReduceInitialCardMarks.

Is there a prepared Eclipse project of TomP2P on Android with all dependencies?

Yes, TomP2P_Android.zip is a zipped Eclipse project running TomP2P 3.2.9. If you have the ADT plugin installed, you can import this project and run TomP2P in the Android emulator.

In my P2P network, I have a couple of peers joining and leaving. Five peers store data on the DHT. When another peer tries to fetch the data, it sometimes gets all five data items, but sometimes four or fewer. Why?

This is due to churn. Let's say you have 5 peers, and peer A stores data on the 3 closest of those 5 peers (B, C, D). Now 3 new peers join (F, G, H), all of which are closer to the ID of the stored data. If another peer now searches for this data on the three closest peers, it will fail. Eventually, peer A will republish (direct replication) and store its data on F, G, and H. If indirect replication is enabled, the data is handed over as soon as a closer peer is detected, and the lookup will only fail if the transfer is still in progress.
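The effect can be sketched by sorting peers by XOR distance to the data's key (BigInteger stands in for the 160-bit IDs; the values are made up for illustration): once closer peers join, the "3 closest" set no longer contains the peers that actually hold the replicas.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ClosestPeers {
    // Return the n peers whose IDs have the smallest XOR distance to the key.
    public static List<BigInteger> closest(List<BigInteger> peers, BigInteger key, int n) {
        List<BigInteger> sorted = new ArrayList<>(peers);
        sorted.sort(Comparator.comparing(p -> p.xor(key)));
        return sorted.subList(0, Math.min(n, sorted.size()));
    }

    public static void main(String[] args) {
        BigInteger key = new BigInteger("100", 16);
        List<BigInteger> peers = new ArrayList<>(List.of(
                new BigInteger("110", 16), new BigInteger("120", 16),
                new BigInteger("140", 16), new BigInteger("180", 16),
                new BigInteger("1f0", 16)));
        // The data is stored on these 3 peers (the B, C, D of the example).
        System.out.println(closest(peers, key, 3));

        // Three new peers join that are closer to the key ...
        peers.add(new BigInteger("101", 16));
        peers.add(new BigInteger("102", 16));
        peers.add(new BigInteger("104", 16));
        // ... so a lookup on the 3 closest peers now misses the replica
        // holders until the data is republished or handed over.
        System.out.println(closest(peers, key, 3));
    }
}
```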

Since the tracker uses Bloom filters, what effects do collisions have?

The Bloom filter is used to tell a tracker which peers we are interested in. If a peer matches the filter as a false positive, the tracker does not return that peer to us, even though we might have wanted it. To reduce the chance of such collisions, the Bloom filter size can be increased.
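A Bloom filter trades space for a small false-positive rate; a minimal sketch (my own toy filter, not the one TomP2P uses) shows why a larger bit array makes collisions rarer:

```java
import java.util.BitSet;

public class PeerBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public PeerBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive the i-th bit position for an element from its hashCode.
    private int index(String element, int i) {
        int h = element.hashCode() * 31 + i * 0x9e3779b9;
        return Math.floorMod(h, size);
    }

    public void add(String element) {
        for (int i = 0; i < hashes; i++) {
            bits.set(index(element, i));
        }
    }

    // May return true for elements never added (a false positive),
    // but never returns false for an element that was added.
    public boolean mightContain(String element) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(index(element, i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        PeerBloomFilter small = new PeerBloomFilter(16, 2);
        PeerBloomFilter large = new PeerBloomFilter(4096, 2);
        for (int i = 0; i < 50; i++) {
            small.add("peer" + i);
            large.add("peer" + i);
        }
        // Count false positives among peer names that were never added:
        // the tiny filter is saturated and collides constantly.
        int smallFp = 0, largeFp = 0;
        for (int i = 1000; i < 2000; i++) {
            if (small.mightContain("peer" + i)) smallFp++;
            if (large.mightContain("peer" + i)) largeFp++;
        }
        System.out.println(smallFp + " vs " + largeFp);
    }
}
```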

Is broadcasting to all peers supported?

Not at the moment. However, it's not difficult to implement; if there is a need for it, send me (tom at tomp2p.net) a message.

JVM fault – A fatal error has been detected by the Java Runtime Environment. What does it mean?

It means that your Java code is fine, but the JVM or some native code has a bug. A student recently encountered the following hs_err_pid dump with TomP2P on Windows 7, 64-bit, Java 6:


#
# A fatal error has been detected by the Java Runtime Environment:
#
#  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000006da36869, pid=1288, tid=5860
#
# JRE version: 6.0_21-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (17.0-b17 mixed mode windows-amd64 )
# Problematic frame:
# V  [jvm.dll+0x246869]
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x0000000068935000):  JavaThread "scheduler-0" [_thread_in_vm, id=5860, stack(0x000000006a430000,0x000000006a530000)]

siginfo: ExceptionCode=0xc0000005, reading address 0x000000000000ce38

Registers:
EAX=0x0000000000000010, EBX=0x0000000068935000, ECX=0x0000000000000010, EDX=0x0000000000000000
ESP=0x000000006a52e6a0, EBP=0x0000000000000000, ESI=0x0000000000000010, EDI=0x0000000000000000
EIP=0x000000006da36869, EFLAGS=0x0000000000010246

Top of Stack: (sp=0x000000006a52e6a0)
0x000000006a52e6a0:   0000000005dbc640 000000006da33227
0x000000006a52e6b0:   0000000068935000 00000000689351c8
0x000000006a52e6c0:   0000000000000038 000000000000ce28
0x000000006a52e6d0:   0000000068935000 0000000000000000
0x000000006a52e6e0:   0000000068935000 0000000000000000
0x000000006a52e6f0:   0000000005dbc640 0000000000000000
0x000000006a52e700:   00000000689351c8 000000006a52e7e8
0x000000006a52e710:   000000000052f950 000000006d60a474
0x000000006a52e720:   0000000000000017 000000006a52e7e8
0x000000006a52e730:   00000000689351c8 000000006a52e8e8
0x000000006a52e740:   000000006a52e750 00000000002a0b10
0x000000006a52e750:   0000000000000000 0000000000000004
0x000000006a52e760:   000007a247393c3d 000000004955e5d8
0x000000006a52e770:   0000000068935000 000000006a52e8f0
0x000000006a52e780:   0000000005b4ccb0 000000006a52e8f0
0x000000006a52e790:   0000000005b9ddc8 000000006a52e8b8 

Instructions: (pc=0x000000006da36869)
0x000000006da36859:   84 d2 49 8b c0 49 0f 45 c2 48 63 c8 8d 44 35 00
0x000000006da36869:   42 3b 04 09 0f 87 e2 00 00 00 85 f6 7e 5d 84 d2 


Stack: [0x000000006a430000,0x000000006a530000],  sp=0x000000006a52e6a0,  free space=3f90000000000000000k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [jvm.dll+0x246869]

[error occurred during error reporting (printing native stack), id 0xc0000005]

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  sun.nio.ch.Net.connect(Ljava/io/FileDescriptor;Ljava/net/InetAddress;II)I+0
j  sun.nio.ch.SocketChannelImpl.connect(Ljava/net/SocketAddress;)Z+162
j  org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(Lorg/jboss/netty/channel/socket/nio/NioClientSocketChannel;Lorg/jboss/netty/channel/ChannelFuture;Ljava/net/SocketAddress;)V+5
j  org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(Lorg/jboss/netty/channel/ChannelPipeline;Lorg/jboss/netty/channel/ChannelEvent;)V+165
j  org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(Lorg/jboss/netty/channel/ChannelEvent;)V+28
j  org.jboss.netty.handler.stream.ChunkedWriteHandler.handleDownstream(Lorg/jboss/netty/channel/ChannelHandlerContext;Lorg/jboss/netty/channel/ChannelEvent;)V+9
j  org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(Lorg/jboss/netty/channel/DefaultChannelPipeline$DefaultChannelHandlerContext;Lorg/jboss/netty/channel/ChannelEvent;)V+26
j  org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(Lorg/jboss/netty/channel/ChannelEvent;)V+55
j  net.tomp2p.message.TomP2PEncoderTCP.handleDownstream(Lorg/jboss/netty/channel/ChannelHandlerContext;Lorg/jboss/netty/channel/ChannelEvent;)V+9
j  org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(Lorg/jboss/netty/channel/DefaultChannelPipeline$DefaultChannelHandlerContext;Lorg/jboss/netty/channel/ChannelEvent;)V+26
j  org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(Lorg/jboss/netty/channel/ChannelEvent;)V+36
j  org.jboss.netty.channel.Channels.connect(Lorg/jboss/netty/channel/Channel;Ljava/net/SocketAddress;)Lorg/jboss/netty/channel/ChannelFuture;+39
j  org.jboss.netty.channel.AbstractChannel.connect(Ljava/net/SocketAddress;)Lorg/jboss/netty/channel/ChannelFuture;+2
j  org.jboss.netty.bootstrap.ClientBootstrap.connect(Ljava/net/SocketAddress;Ljava/net/SocketAddress;)Lorg/jboss/netty/channel/ChannelFuture;+125
j  org.jboss.netty.bootstrap.ClientBootstrap.connect(Ljava/net/SocketAddress;)Lorg/jboss/netty/channel/ChannelFuture;+27
j  net.tomp2p.connection.ChannelCreator.createChannelTCP(Lorg/jboss/netty/channel/ChannelHandler;Lorg/jboss/netty/channel/ChannelHandler;Ljava/net/SocketAddress;Ljava/net/SocketAddress;I)Lorg/jboss/netty/channel/ChannelFuture;+104
j  net.tomp2p.connection.ChannelCreator.createTCPChannel(Lnet/tomp2p/connection/ReplyTimeoutHandler;Lnet/tomp2p/rpc/RequestHandlerTCP;Lnet/tomp2p/futures/FutureResponse;ILjava/net/InetSocketAddress;)Lorg/jboss/netty/channel/ChannelFuture;+257
j  net.tomp2p.connection.SenderNetty.sendTCP0(Lnet/tomp2p/peers/PeerAddress;Lnet/tomp2p/rpc/RequestHandlerTCP;Lnet/tomp2p/futures/FutureResponse;Lnet/tomp2p/message/Message;Lnet/tomp2p/connection/ChannelCreator;I)V+83
j  net.tomp2p.connection.SenderNetty.sendTCP(Lnet/tomp2p/rpc/RequestHandlerTCP;Lnet/tomp2p/futures/FutureResponse;Lnet/tomp2p/message/Message;Lnet/tomp2p/connection/ChannelCreator;I)V+69
j  net.tomp2p.rpc.RequestHandlerTCP.fireAndForgetTCP(Lnet/tomp2p/connection/ChannelCreator;)Lnet/tomp2p/futures/FutureResponse;+27
j  net.tomp2p.rpc.HandshakeRPC.fireTCP(Lnet/tomp2p/peers/PeerAddress;Lnet/tomp2p/connection/ChannelCreator;)Lnet/tomp2p/futures/FutureResponse;+9
j  net.tomp2p.rpc.HandshakeRPC$2.operationComplete(Lnet/tomp2p/futures/FutureChannelCreator;)V+22
j  net.tomp2p.rpc.HandshakeRPC$2.operationComplete(Lnet/tomp2p/futures/BaseFuture;)V+5
j  net.tomp2p.futures.BaseFutureImpl.callOperationComplete(Lnet/tomp2p/futures/BaseFutureListener;)V+2
j  net.tomp2p.futures.BaseFutureImpl.notifyListerenrs()V+31
j  net.tomp2p.futures.FutureChannelCreator.reserved(Lnet/tomp2p/connection/ChannelCreator;)V+40
j  net.tomp2p.connection.ConnectionReservation$1.run()V+29
j  java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Ljava/lang/Runnable;)V+59
j  java.util.concurrent.ThreadPoolExecutor$Worker.run()V+28
j  java.lang.Thread.run()V+11
v  ~StubRoutines::call_stub

The solution was to upgrade to the latest JDK, in this case JDK 7. We also changed the Java code according to this suggestion, so we are not sure which change actually solved the issue.