I have two Java applications deployed on Tomcat. For inter-application communication, I provide a JAR file of one application (a Java API) to the other, so the call becomes a direct method invocation, which is fast.
Now I am thinking of creating a REST service instead. Will it add extra time? Of course it will, but how much in the worst case, keeping in mind that the applications are on an intranet?
How can I measure network latency, which I think will be the most time-consuming part?
I need expert advice/opinion: how should we make design decisions in cases like this?
TL;DR: latency is unpredictable in general.
Start by investigating the worst case. If the REST host goes down, a call will only return after the TCP timeout expires (configurable; OS defaults can reach 10 minutes).
To prevent this, ensure you have adequate timeouts set on the client side and be ready to handle failures.
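For example, with the JDK's built-in `java.net.http.HttpClient` (Java 11+), a client-side sketch might look like this (the host name and path are assumptions, not from your setup):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class TimeoutClient {
    public static void main(String[] args) throws Exception {
        // Fail fast instead of waiting for the OS-level TCP timeout.
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))   // connection establishment
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://intranet-host:8080/api/resource")) // hypothetical URL
                .timeout(Duration.ofSeconds(5))          // whole request/response
                .GET()
                .build();

        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        } catch (HttpTimeoutException e) {
            // Handle the failure instead of blocking the caller for minutes.
            System.err.println("Request timed out: " + e.getMessage());
        }
    }
}
```

The exact timeout values are placeholders; pick them from your measured latency plus a safety margin.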
Latency also depends on distance, hardware, software and payload.
Hardware, OS, and distance delays can be measured with a simple ping command, but the web server and TCP connection usually introduce a significant additional delay, which sometimes depends on the payload too (static resources, for example, are cheaper to serve).
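If you want a ping-style check from inside the JVM rather than the shell, `InetAddress.isReachable` gives a rough first estimate (note it falls back from ICMP to a TCP echo probe when the JVM lacks raw-socket privileges, so treat the numbers as approximate; the host name is a placeholder):

```java
import java.net.InetAddress;

public class PingCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical intranet host; substitute the machine running Tomcat.
        InetAddress host = InetAddress.getByName("intranet-host");

        long start = System.nanoTime();
        boolean reachable = host.isReachable(2000); // 2 s timeout
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(reachable
                ? "reachable in ~" + elapsedMs + " ms"
                : "unreachable within timeout");
    }
}
```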
You can try various techniques to measure the latency of a mocked web server, but be aware that it will change significantly once you introduce your actual business logic.
To answer your question: there are plenty of services on the web that measure your site's performance, but you will have to rerun them each time you change the business logic.
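One minimal way to measure a mocked server, assuming you are happy with the JDK's built-in `com.sun.net.httpserver.HttpServer`, is to stand it up in-process and time round trips after a warm-up (remember to substitute your real endpoint later, since the business logic will dominate):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        // Mock server: responds immediately with a tiny static payload.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ping", exchange -> {
            byte[] body = "pong".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + port + "/ping"))
                .GET()
                .build();

        // Warm up (JIT compilation, connection reuse) before measuring.
        for (int i = 0; i < 10; i++) {
            client.send(request, HttpResponse.BodyHandlers.ofString());
        }
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 100; i++) {
            long start = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.ofString());
            best = Math.min(best, System.nanoTime() - start);
        }
        server.stop(0);
        System.out.printf("best round trip: %.3f ms%n", best / 1_000_000.0);
    }
}
```

This measures only HTTP framing plus the local TCP stack; over the intranet you would add the ping-measured network delay, and your real handler's processing time on top of that.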