JVM Mechanics
A peek under the hood

Gil Tene, CTO & co-Founder, Azul Systems
©2012 Azul Systems, Inc.
About me: Gil Tene
co-founder, CTO
@Azul Systems
Working JVMs since
2001, Managed
runtimes since 1989
Created Pauseless & C4
core GC algorithms
(Tene, Wolf)

A Long history building
Virtual & Physical
Machines, Operating
Systems, Enterprise
apps, etc...
©2012 Azul Systems, Inc.
* working on real-world trash compaction issues, circa 2004
About Azul
Vega

We make scalable Virtual
Machines
Have built “whatever it takes
to get job done” since 2002
3 generations of custom SMP
Multi-core HW (Vega)
Now Pure software for
commodity x86 (Zing)

C4

“Industry firsts” in Garbage
collection, elastic memory,
Java virtualization, memory
scale
©2012 Azul Systems, Inc.
High level agenda
Compiler stuff
Adaptive behavior stuff
Ordering stuff
Garbage Collection stuff
Some chest beating
Open discussion
©2012 Azul Systems, Inc.
Compiler Stuff
The JIT compiler transforms code
The code actually executed can be very
different from the code you write
Some simple compiler tricks
Code can be reordered...
int doMath(int x, int y, int z) {
int a = x + y;
int b = x - y;
int c = z + x;
return a + b;
}

Can be reordered to:
int doMath(int x, int y, int z) {
int c = z + x;
int b = x - y;
int a = x + y;
return a + b;
}
©2012 Azul Systems, Inc.
Dead code can be removed
int doMath(int x, int y, int z) {
int a = x + y;
int b = x - y;
int c = z + x;
return a + b;
}

Can be reduced to:
int doMath(int x, int y, int z) {
int a = x + y;
int b = x - y;
return a + b;
}
©2012 Azul Systems, Inc.
Values can be propagated
int doMath(int x, int y, int z) {
int a = x + y;
int b = x - y;
int c = z + x;
return a + b;
}

Can be reduced to:
int doMath(int x, int y, int z) {
return x + y + x - y;
}

©2012 Azul Systems, Inc.
Math can be simplified
int doMath(int x, int y, int z) {
int a = x + y;
int b = x - y;
int c = z + x;
return a + b;
}

Can be reduced to:
int doMath(int x, int y, int z) {
return x + x;
}

©2012 Azul Systems, Inc.
So why does this matter
Keep your code “readable”
largestValueLog = Math.log(largestValueWithSingleUnitResolution);

magnitude = (int) Math.ceil(largestValueLog/Math.log(2.0));
subBucketMagnitude = (magnitude > 1) ? magnitude : 1;
subBucketCount = (int) Math.pow(2, subBucketMagnitude);
subBucketMask = subBucketCount - 1;

Hard enough to follow as it is
No value in “optimizing” human-readable meaning away
Compiled code will end up the same anyway
©2012 Azul Systems, Inc.
Some more compiler tricks
Reads can be cached
int distanceRatio(Object a) {
int distanceTo = a.getX() - start;
int distanceAfter = end - a.getX();
return distanceTo/distanceAfter;
}

Is the same as
int distanceRatio(Object a) {
int x = a.getX();
int distanceTo = x - start;
int distanceAfter = end - x;
return distanceTo/distanceAfter;
}
©2012 Azul Systems, Inc.
Reads can be cached
void loopUntilFlagSet(Object a) {
while (!a.flagIsSet()) {
loopcount++;
}
}

Is the same as:
void loopUntilFlagSet(Object a) {
boolean flagIsSet = a.flagIsSet();
while (!flagIsSet) {
loopcount++;
}
}
©2012 Azul Systems, Inc.	

That’s what volatile is for...
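
For illustration (not from the original slides), a minimal sketch of the fix: declaring the flag volatile tells the JIT it may not cache the read, so the loop re-reads the flag on every iteration. The FlagHolder class and its field names are made up.

// Hypothetical class, for illustration only: the volatile read of flagIsSet
// cannot be hoisted out of the loop, so another thread calling setFlag()
// will actually terminate it.
class FlagHolder {
    private volatile boolean flagIsSet = false;
    private long loopcount = 0;

    void loopUntilFlagSet() {
        while (!flagIsSet) {   // volatile read on every iteration
            loopcount++;
        }
    }

    void setFlag() {           // called from another thread
        flagIsSet = true;
    }
}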
Writes can be eliminated
Intermediate values might never be visible
void updateDistance(Object a) {
int distance = 100;
a.setX(distance);
a.setX(distance * 2);
a.setX(distance * 3);
}

Is the same as
void updateDistance(Object a) {
a.setX(300);
}
©2012 Azul Systems, Inc.
Writes can be eliminated
Intermediate values might never be visible
void updateDistance(Object a) {
a.setVisibleValue(0);
for (int i = 0; i < 1000000; i++) {
a.setInternalValue(i);
}
a.setVisibleValue(a.getInternalValue());
}

Is the same as
void updateDistance(Object a) {
a.setInternalValue(1000000);
a.setVisibleValue(1000000);
}
©2012 Azul Systems, Inc.
Inlining...
public class Thing {
private int x;
public final int getX() { return x; }
}
...
myX = thing.getX();

Is the same as
Class Thing {
int x;
}
...
myX = thing.x;
©2012 Azul Systems, Inc.
Things JIT compilers can do
..and static compilers can have
a hard time with
Class Hierarchy Analysis (CHA)
Can perform global analysis on currently loaded code
Deduce stuff about inheritance, method overrides, etc.

Can make optimization decisions based on assumptions
Re-evaluate assumptions when loading new classes
Throw away code that conflicts with assumptions
before class loading makes them invalid

©2012 Azul Systems, Inc.
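
A hedged illustration of CHA-driven devirtualization (Shape and Circle are invented names, not from the deck): while Circle is the only loaded subclass, the JIT can treat the virtual call as a direct, inlinable call, and must throw that code away if a conflicting class is loaded later.

// While Circle is the only loaded subclass of Shape, s.area() can be
// compiled as a direct call to Circle.area() and inlined. Loading, say,
// a Square subclass later invalidates the assumption: the compiled code
// is discarded (deoptimized) and recompiled.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}

class Areas {
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            sum += s.area();   // virtual call, devirtualized under CHA
        }
        return sum;
    }
}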
Inlining works without “final”
public class Thing {
private int x;
public int getX() { return x; }
}
...
myX = thing.getX();

Is the same as
Class Thing {
int x;
}
...
myX = thing.x;

As long as there is only one implementer of getX()
©2012 Azul Systems, Inc.
Speculative stuff
The power of the “uncommon trap”
Being able to throw away wrong code is very useful
Speculatively assuming callee type:
a polymorphic call site may in practice be “monomorphic” or “megamorphic”
Can make virtual calls static even without CHA
Can speculatively inline things without CHA
Speculatively assuming branch behavior
We’ve only ever seen this thing go one way, so....
©2012 Azul Systems, Inc.
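
A sketch of what speculative monomorphic inlining looks like in practice (Codec and FastCodec are invented names): the call site is polymorphic in the source, but if profiling only ever sees one receiver type, the JIT inlines it behind a type guard whose slow path is an uncommon trap.

// The call site below is polymorphic in the source, but if runtime
// profiling only ever sees FastCodec, the JIT can speculatively inline
// FastCodec.encode() behind a type check. Hitting the slow path is an
// "uncommon trap": deoptimize, reprofile, recompile.
interface Codec {
    byte[] encode(String s);
}

class FastCodec implements Codec {
    public byte[] encode(String s) { return s.getBytes(); }
}

class Pipeline {
    static byte[] send(Codec c, String msg) {
        return c.encode(msg);   // speculatively compiled as FastCodec.encode()
    }
}

// Conceptually, the compiled code behaves like:
//   if (c.getClass() == FastCodec.class) { /* inlined fast path */ }
//   else { /* uncommon trap: deoptimize and revisit the assumption */ }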
Adaptive compilation makes
cleaner code practical
Reduces need to trade off clean design against speed

E.g. “final” should be used on methods only when you
want to prohibit extension or overriding; it has no effect
on speed.
E.g. branching can be written “naturally”

©2012 Azul Systems, Inc.
Interesting side effects
Adaptive compilation is...
adaptive
Measuring actual behavior is harder
Micro-benchmarking is an art
JITs are a moving target
“Warmup” techniques can often fail

©2012 Azul Systems, Inc.
Warmup problems
Common Example:
Trading system wants to have the first trade be fast
So run 20,000 “fake” messages through the system to warm up,
letting JIT compilers optimize code (and get deopts out of the way) before actual trades

What really happens
Code is written to do different things “if this is a fake message”
e.g. “Don’t send to the exchange if this is a fake message”
JITs optimize for fake path, including speculatively assuming “fake”
First real message through deopts...
©2012 Azul Systems, Inc.
Warmup tips...
(to get “first real thing” to be fast)
System should not distinguish between “real” and “fake”
Make that an external concern
Avoid all conditional code based on “fake”
Avoid all class-specific calls based on “fake” (“object oriented ifs”).

Use “real” input sources and output targets instead
Have system output to a sink target that knows it is “fake”
Make output decisions based on data, not code
E.g. array of sink targets, with array index computed from message
payload

©2012 Azul Systems, Inc.
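
A minimal sketch of the last tip, with invented names (Sink, Router, and so on): output decisions are driven by data carried with the message, so warmup traffic and real traffic exercise exactly the same code path and the JIT never learns a “this is fake” branch to specialize on.

// Hot path is identical for warmup and production traffic; only the sink
// index differs. No "if (fake)" conditional anywhere in the routing code.
interface Sink {
    void send(String payload);
}

class ExchangeSink implements Sink {
    public void send(String payload) { /* send to the real exchange */ }
}

class WarmupSink implements Sink {
    public void send(String payload) { /* discard: this sink knows it is "fake" */ }
}

class Router {
    private final Sink[] sinks;          // e.g. [0] = real exchange, [1] = warmup sink

    Router(Sink[] sinks) { this.sinks = sinks; }

    void route(int sinkIndex, String payload) {
        sinks[sinkIndex].send(payload);  // index computed from message payload
    }
}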
Ordering stuff
Ordering of operations
Within a thread, it’s trivial: “happens before”
Across threads?
News flash: CPU memory ordering doesn’t matter
There is a much bigger culprit at play: Compilers
The code will be reordered before your CPU ever sees it
The only ordering rules that matter are the JMM ones.

Intuitive read:
“Happens before holds within threads”
Things can move into, but not out of synchronized blocks
Volatile stuff is a bit more tricky...
©2012 Azul Systems, Inc.
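
A small example of what those JMM rules buy you in practice (the Publisher class and its fields are illustrative, not from the slides): the volatile write establishes a happens-before edge, so a reader that sees the flag also sees the data written before it.

// The volatile write to ready happens-before any volatile read that
// observes true, so a reader that sees ready == true is guaranteed to
// also see data == 42. Without volatile, the compiler (and only then the
// CPU) would be free to reorder or cache these accesses.
class Publisher {
    private int data = 0;
    private volatile boolean ready = false;

    void publish() {
        data = 42;        // ordinary write
        ready = true;     // volatile write: releases everything written above
    }

    int readIfReady() {
        if (ready) {      // volatile read: acquires the writes that preceded it
            return data;  // guaranteed to see 42
        }
        return -1;
    }
}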
Ordering of operations

Source: http://g.oswego.edu/dl/jmm/cookbook.html
©2012 Azul Systems, Inc.
Garbage Collection Stuff
Most of what people seem to “know”
about Garbage Collection is wrong
In many cases, it’s much better than you may think
GC is extremely efficient. Much more so than malloc()
Dead objects cost nothing to collect
GC will find all the dead objects (including cyclic graphs)
...

In many cases, it’s much worse than you may think
Yes, it really does stop for ~1 sec per live GB (in most JVMs).
No, GC does not mean you can’t have memory leaks
No, those pauses you eliminated from your 20 minute test are
not gone
...
©2012 Azul Systems, Inc.
Generational Collection
Weak Generational Hypothesis: “most objects die young”
Focus collection efforts on young generation:
Use a moving collector: work is linear to the live set
The live set in the young generation is a small % of the space
Promote objects that live long enough to older generations

Only collect older generations as they fill up
“Generational filter” reduces rate of allocation into older generations

Tends to be (order of magnitude) more efficient
Great way to keep up with high allocation rate
Practical necessity for keeping up with processor throughput
©2012 Azul Systems, Inc.
GC Efficiency:
Empty memory vs. CPU

©2012 Azul Systems, Inc.
[Chart: GC CPU % vs. heap size for a fixed live set; GC cost approaches 100% of CPU as the heap size approaches the live set, and drops as the heap grows]
Two Intuitive limits

If we had exactly 1 byte of empty memory at all
times, the collector would have to work “very hard”,
and GC would take 100% of the CPU time
If we had infinite empty memory, we would never have
to collect, and GC would take 0% of the CPU time
GC CPU % will follow a rough 1/x curve between these
two limit points, dropping as the amount of memory
increases.

©2012 Azul Systems, Inc.
Empty memory needs
(empty memory == CPU power)
The amount of empty memory in the heap is the
dominant factor controlling the amount of GC work
For both Copy and Mark/Compact collectors, the
amount of work per cycle is linear to live set
The amount of memory recovered per cycle is equal to
the amount of unused memory: (heap size) - (live set)
The collector has to perform a GC cycle when the
empty memory runs out
A Copy or Mark/Compact collector’s efficiency doubles
with every doubling of the empty memory
©2012 Azul Systems, Inc.
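
A back-of-the-envelope model implied by the two statements above, written as a sketch rather than any official formula: work per cycle is proportional to the live set and each cycle reclaims (heap size) - (live set), so GC cost per byte allocated is roughly proportional to liveSet / (heapSize - liveSet), which halves every time the empty memory doubles.

// Toy model only: relative GC work per byte of allocation, assuming
// work per cycle ~ live set and reclaimed memory per cycle = heap - live.
class GcCostModel {
    static double relativeGcCost(double liveSetGB, double heapSizeGB) {
        double emptyGB = heapSizeGB - liveSetGB;
        return liveSetGB / emptyGB;
    }

    public static void main(String[] args) {
        double live = 2.0;                       // 2 GB live set
        for (double heap : new double[] {3, 4, 6, 10}) {
            // 1, 2, 4, 8 GB empty -> cost 2.00, 1.00, 0.50, 0.25
            System.out.printf("heap %4.1f GB -> relative GC cost %.2f%n",
                    heap, relativeGcCost(live, heap));
        }
    }
}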
What empty memory controls
Empty memory controls efficiency (amount of collector
work needed per amount of application work
performed)
Empty memory controls the frequency of pauses (if
the collector performs any Stop-the-world operations)
Empty memory DOES NOT control pause times (only
their frequency)
In Mark/Sweep/Compact collectors that pause for
sweeping, more empty memory means less frequent but
LARGER pauses
©2012 Azul Systems, Inc.
GC and latency:
That pesky stop-the-world thing

©2012 Azul Systems, Inc.
Delaying the inevitable
Some form of copying/compaction is inevitable in practice
And compacting anything requires scanning/fixing all references to it

Delay tactics focus on getting “easy empty space” first
This is the focus for the vast majority of GC tuning

Most objects die young [Generational]
So collect young objects only, as much as possible. Hope for short STW.
But eventually, some old dead objects must be reclaimed

Most old dead space can be reclaimed without moving it
[e.g. CMS] track dead space in lists, and reuse it in place
But eventually, space gets fragmented, and needs to be moved

Much of the heap is not “popular” [e.g. G1, “Balanced”]
A non-popular region will only be pointed to from a small % of the heap
So compact non-popular regions in short stop-the-world pauses
But eventually, popular objects and regions need to be compacted
©2012 Azul Systems, Inc.
Memory use
How many of you use heap sizes of:
more than ½ GB?
more than 1 GB?
more than 2 GB?
more than 4 GB?
more than 10 GB?
more than 20 GB?
more than 50 GB?
©2012 Azul Systems, Inc.
Reality check: servers in 2012
Retail prices, major web server store (US $, Oct 2012)
24 vCore, 128GB server ≈ $5K
24 vCore, 256GB server ≈ $8K
32 vCore, 384GB server ≈ $14K
48 vCore, 512GB server ≈ $19K
64 vCore, 1TB server ≈ $36K
Cheap (< $1/GB/Month), and roughly linear to ~1TB
10s to 100s of GB/sec of memory bandwidth
The Application Memory Wall
A simple observation:
Application instances appear to be unable to
make effective use of modern server memory
capacities

The size of application instances as a % of a
server’s capacity is rapidly dropping

©2012 Azul Systems, Inc.
How much memory do applications need?
“640KB ought to be enough for anybody”
WRONG!
So what’s the right number?
6,400K?
64,000K?
640,000K?
6,400,000K?
64,000,000K?
There is no right number
Target moves at 50x-100x per decade
©2012 Azul Systems, Inc.
“I've said some stupid things and
some wrong things, but not that.
No one involved in computers
would ever say that a certain
amount of memory is enough for
all time …” - Bill Gates, 1996
“Tiny” application history
Assuming Moore’s Law means
“transistor counts grow at ≈2x every ≈18 months”,
it also means memory size grows ≈100x every 10 years

1980: 100KB apps on a ¼ to ½ MB server
1990: 10MB apps on a 32 – 64 MB server
2000: 1GB apps on a 2 – 4 GB server
2010: ??? GB apps on 256 GB (the Application Memory Wall)

“Tiny”: would be “silly” to distribute

©2012 Azul Systems, Inc.
What is causing the
Application Memory Wall?
Garbage Collection is a clear and dominant cause
There seem to be practical heap size limits for
applications with responsiveness requirements
[Virtually] All current commercial JVMs will exhibit a
multi-second pause on a normally utilized 2-6GB heap.
It’s a question of “When” and “How often”, not “If”.
GC tuning only moves the “when” and the “how often” around

Root cause: The link between scale and responsiveness
©2012 Azul Systems, Inc.
The problems that need solving
(areas where the state of the art needs improvement)

Robust Concurrent Marking
In the presence of high mutation and allocation rates
Cover modern runtime semantics (e.g. weak refs, lock deflation)

Compaction that is not monolithic-stop-the-world
E.g. stay responsive while compacting ¼ TB heaps
Must be robust: not just a tactic to delay STW compaction
[current “incremental STW” attempts fall short on robustness]

Young-Gen that is not monolithic-stop-the-world
Stay responsive while promoting multi-GB data spikes
Concurrent or “incremental STW” may both be ok
Surprisingly little work done in this specific area
©2012 Azul Systems, Inc.
jHiccup

©2012 Azul Systems, Inc.
Discontinuities in Java platform execution
Hiccups"by"Time"Interval"

Max"per"Interval"

99%"

99.90%"

99.99%"

Max"

Hiccup&Dura*on&(msec)&

1800"
1600"
1400"
1200"
1000"
800"
600"
400"
200"
0"
0"

200"

400"

600"

800"

1000"

1200"

1400"

1600"

1800"

&Elapsed&Time&(sec)&

Hiccups"by"Percen@le"Distribu@on"

1800"

Hiccup&Dura*on&(msec)&

1600"

Max=1665.024&

1400"
1200"
1000"
800"
600"
400"
200"
0"

©2012 Azul Systems, Inc.	

	

	

0%"

	

90%"

	

99%"

	

&
99.9%"

&
Percen*le&

99.99%"

99.999%"
jHiccup
A tool for capturing and displaying platform hiccups
Records any observed non-continuity of the underlying platform
Plots results in simple, consistent format

Simple, non-intrusive
As simple as adding the word “jHiccup” to your java launch line
% jHiccup java myflags myApp
(Or use as a java agent)
Adds a background thread that samples time @ 1000/sec

Open Source
Released to the public domain, creative commons CC0
©2012 Azul Systems, Inc.
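
A minimal sketch of the measurement idea (this is not jHiccup’s actual implementation, which records into HdrHistogram with more careful correction): a background thread sleeps for a short, fixed interval and records how much longer than that interval each wakeup actually took. Any application thread would have experienced at least that much delay, whatever the cause.

// Illustrative only: sample the platform by sleeping ~1 msec at a time and
// recording the excess wakeup delay as a "hiccup".
class HiccupMeter implements Runnable {
    private static final long INTERVAL_NS = 1_000_000L;  // 1 msec
    private volatile long maxHiccupNs = 0;

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long start = System.nanoTime();
            try {
                Thread.sleep(1);                          // expect ~1 msec
            } catch (InterruptedException e) {
                return;
            }
            long observed = System.nanoTime() - start;
            long hiccup = Math.max(0, observed - INTERVAL_NS);
            if (hiccup > maxHiccupNs) {
                maxHiccupNs = hiccup;                     // worst case seen so far
            }
        }
    }

    long maxHiccupMillis() { return maxHiccupNs / 1_000_000L; }
}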
Telco App Example
[Chart: Hiccups by Time Interval; max per interval, 99%, 99.9%, 99.99%, and Max; hiccup duration (msec) vs. elapsed time (sec); annotation: max time per interval]
[Chart: Hiccups by Percentile Distribution; annotation: optional SLA plotting, duration at percentile levels]

©2012 Azul Systems, Inc.
Examples

©2012 Azul Systems, Inc.
Idle App on Quiet System

Idle App on Busy System

[Charts: Hiccups by Time Interval and Hiccups by Percentile Distribution for each system; Quiet System Max = 22.336 msec, Busy System Max = 49.728 msec]

©2012 Azul Systems, Inc.
Idle App on Quiet System

Idle App on Dedicated System

[Charts: Hiccups by Time Interval and Hiccups by Percentile Distribution for each system; Quiet System Max = 22.336 msec, Dedicated System Max = 0.411 msec]

©2012 Azul Systems, Inc.
EHCache: 1GB data set under load
Hiccups"by"Time"Interval"

Max"per"Interval"

99%"

99.90%"

99.99%"

Max"

Hiccup&Dura*on&(msec)&

4000"
3500"
3000"
2500"
2000"
1500"
1000"
500"
0"
0"

500"

1000"

1500"

2000"

&Elapsed&Time&(sec)&

Hiccups"by"Percen@le"Distribu@on"

Hiccup&Dura*on&(msec)&

4000"
3500"

Max=3448.832&

3000"
2500"
2000"
1500"
1000"
500"
0"

©2012 Azul Systems, Inc.	

	

	

0%"

	

90%"

	

99%"

	

&
99.9%"

&
Percen*le&

99.99%"

99.999%"

99.9999%"
Fun with jHiccup

©2012 Azul Systems, Inc.
Oracle HotSpot CMS, 1GB in an 8GB heap

Zing 5, 1GB in an 8GB heap

[Charts: Hiccups by Time Interval and Hiccups by Percentile Distribution for each JVM; Oracle HotSpot CMS Max = 13156.352 msec, Zing 5 Max = 20.384 msec]

©2012 Azul Systems, Inc.
Oracle HotSpot CMS, 1GB in an 8GB heap

Zing 5, 1GB in an 8GB heap

[Charts: same data with both JVMs on identical msec axes; Oracle HotSpot CMS Max = 13156.352 msec, Zing 5 Max = 20.384 msec]

Drawn to scale
©2012 Azul Systems, Inc.
What you can expect (from Zing)
in the low latency world
Assuming individual transaction work is “short” (on
the order of 1 msec), and assuming you don’t have
100s of runnable threads competing for 10 cores...
“Easily” get your application to < 10 msec worst case
With some tuning, 2-3 msec worst case
Can go to below 1 msec worst case...
May require heavy tuning/tweaking
Mileage WILL vary
©2012 Azul Systems, Inc.
Oracle HotSpot (pure newgen)

Zing

[Charts: Hiccups by Time Interval and Hiccups by Percentile Distribution for each JVM; Oracle HotSpot (pure newgen) Max = 22.656 msec, Zing Max = 1.568 msec]

Low latency trading application
©2012 Azul Systems, Inc.
Oracle HotSpot (pure newgen)

Zing

[Charts: same data with both JVMs on identical msec axes; Oracle HotSpot (pure newgen) Max = 22.656 msec, Zing Max = 1.568 msec]

Low latency - Drawn to scale
©2012 Azul Systems, Inc.
Open Discussion
http://www.azulsystems.com
http://www.jhiccup.com

©2012 Azul Systems, Inc.

Más contenido relacionado

Destacado

Priming Java for Speed at Market Open
Priming Java for Speed at Market OpenPriming Java for Speed at Market Open
Priming Java for Speed at Market OpenAzul Systems Inc.
 
Start Fast and Stay Fast - Priming Java for Market Open with ReadyNow!
Start Fast and Stay Fast - Priming Java for Market Open with ReadyNow!Start Fast and Stay Fast - Priming Java for Market Open with ReadyNow!
Start Fast and Stay Fast - Priming Java for Market Open with ReadyNow!Azul Systems Inc.
 
Enabling Java in Latency Sensitive Environments
Enabling Java in Latency Sensitive EnvironmentsEnabling Java in Latency Sensitive Environments
Enabling Java in Latency Sensitive EnvironmentsC4Media
 
The Java Evolution Mismatch - Why You Need a Better JVM
The Java Evolution Mismatch - Why You Need a Better JVMThe Java Evolution Mismatch - Why You Need a Better JVM
The Java Evolution Mismatch - Why You Need a Better JVMAzul Systems Inc.
 
QCon London: Low latency Java in the real world - LMAX Exchange and the Zing JVM
QCon London: Low latency Java in the real world - LMAX Exchange and the Zing JVMQCon London: Low latency Java in the real world - LMAX Exchange and the Zing JVM
QCon London: Low latency Java in the real world - LMAX Exchange and the Zing JVMAzul Systems, Inc.
 
Dissecting the Hotspot JVM
Dissecting the Hotspot JVMDissecting the Hotspot JVM
Dissecting the Hotspot JVMIvan Ivanov
 
Intrinsic Methods in HotSpot VM
Intrinsic Methods in HotSpot VMIntrinsic Methods in HotSpot VM
Intrinsic Methods in HotSpot VMKris Mok
 
Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When ...
Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When ...Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When ...
Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When ...Charles Nutter
 
JVM Mechanics: When Does the JVM JIT & Deoptimize?
JVM Mechanics: When Does the JVM JIT & Deoptimize?JVM Mechanics: When Does the JVM JIT & Deoptimize?
JVM Mechanics: When Does the JVM JIT & Deoptimize?Doug Hawkins
 

Destacado (9)

Priming Java for Speed at Market Open
Priming Java for Speed at Market OpenPriming Java for Speed at Market Open
Priming Java for Speed at Market Open
 
Start Fast and Stay Fast - Priming Java for Market Open with ReadyNow!
Start Fast and Stay Fast - Priming Java for Market Open with ReadyNow!Start Fast and Stay Fast - Priming Java for Market Open with ReadyNow!
Start Fast and Stay Fast - Priming Java for Market Open with ReadyNow!
 
Enabling Java in Latency Sensitive Environments
Enabling Java in Latency Sensitive EnvironmentsEnabling Java in Latency Sensitive Environments
Enabling Java in Latency Sensitive Environments
 
The Java Evolution Mismatch - Why You Need a Better JVM
The Java Evolution Mismatch - Why You Need a Better JVMThe Java Evolution Mismatch - Why You Need a Better JVM
The Java Evolution Mismatch - Why You Need a Better JVM
 
QCon London: Low latency Java in the real world - LMAX Exchange and the Zing JVM
QCon London: Low latency Java in the real world - LMAX Exchange and the Zing JVMQCon London: Low latency Java in the real world - LMAX Exchange and the Zing JVM
QCon London: Low latency Java in the real world - LMAX Exchange and the Zing JVM
 
Dissecting the Hotspot JVM
Dissecting the Hotspot JVMDissecting the Hotspot JVM
Dissecting the Hotspot JVM
 
Intrinsic Methods in HotSpot VM
Intrinsic Methods in HotSpot VMIntrinsic Methods in HotSpot VM
Intrinsic Methods in HotSpot VM
 
Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When ...
Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When ...Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When ...
Øredev 2011 - JVM JIT for Dummies (What the JVM Does With Your Bytecode When ...
 
JVM Mechanics: When Does the JVM JIT & Deoptimize?
JVM Mechanics: When Does the JVM JIT & Deoptimize?JVM Mechanics: When Does the JVM JIT & Deoptimize?
JVM Mechanics: When Does the JVM JIT & Deoptimize?
 

Similar a JVM Mechanics: A Peek Under the Hood

Enabling Java in Latency Sensitive Applications by Gil Tene, CTO, Azul Systems
Enabling Java in Latency Sensitive Applications by Gil Tene, CTO, Azul SystemsEnabling Java in Latency Sensitive Applications by Gil Tene, CTO, Azul Systems
Enabling Java in Latency Sensitive Applications by Gil Tene, CTO, Azul SystemszuluJDK
 
Enabling Java in Latency-Sensitive Applications
Enabling Java in Latency-Sensitive ApplicationsEnabling Java in Latency-Sensitive Applications
Enabling Java in Latency-Sensitive ApplicationsAzul Systems Inc.
 
Understanding Java Garbage Collection - And What You Can Do About It
Understanding Java Garbage Collection - And What You Can Do About ItUnderstanding Java Garbage Collection - And What You Can Do About It
Understanding Java Garbage Collection - And What You Can Do About ItAzul Systems Inc.
 
Winning With Java at Market Open
Winning With Java at Market OpenWinning With Java at Market Open
Winning With Java at Market OpenAzul Systems, Inc.
 
[REPEAT] Deep Learning for Developers: An Introduction, Featuring Samsung SDS...
[REPEAT] Deep Learning for Developers: An Introduction, Featuring Samsung SDS...[REPEAT] Deep Learning for Developers: An Introduction, Featuring Samsung SDS...
[REPEAT] Deep Learning for Developers: An Introduction, Featuring Samsung SDS...Amazon Web Services
 
Hexadite Real Life Django ORM
Hexadite Real Life Django ORMHexadite Real Life Django ORM
Hexadite Real Life Django ORMMaxim Braitmaiere
 
Understanding Application Hiccups - and What You Can Do About Them
Understanding Application Hiccups - and What You Can Do About ThemUnderstanding Application Hiccups - and What You Can Do About Them
Understanding Application Hiccups - and What You Can Do About ThemAzul Systems Inc.
 
How to write clean & testable code without losing your mind
How to write clean & testable code without losing your mindHow to write clean & testable code without losing your mind
How to write clean & testable code without losing your mindAndreas Czakaj
 
Deep Learning for Developers: An Introduction, Featuring Samsung SDS (AIM301-...
Deep Learning for Developers: An Introduction, Featuring Samsung SDS (AIM301-...Deep Learning for Developers: An Introduction, Featuring Samsung SDS (AIM301-...
Deep Learning for Developers: An Introduction, Featuring Samsung SDS (AIM301-...Amazon Web Services
 
Implementing the Genetic Algorithm in XSLT: PoC
Implementing the Genetic Algorithm in XSLT: PoCImplementing the Genetic Algorithm in XSLT: PoC
Implementing the Genetic Algorithm in XSLT: PoCjimfuller2009
 
JetBrains Day Seoul - Exploring .NET’s memory management – a trip down memory...
JetBrains Day Seoul - Exploring .NET’s memory management – a trip down memory...JetBrains Day Seoul - Exploring .NET’s memory management – a trip down memory...
JetBrains Day Seoul - Exploring .NET’s memory management – a trip down memory...Maarten Balliauw
 
Next Generation Indexes For Big Data Engineering (ODSC East 2018)
Next Generation Indexes For Big Data Engineering (ODSC East 2018)Next Generation Indexes For Big Data Engineering (ODSC East 2018)
Next Generation Indexes For Big Data Engineering (ODSC East 2018)Daniel Lemire
 
GWT is Smarter Than You
GWT is Smarter Than YouGWT is Smarter Than You
GWT is Smarter Than YouRobert Cooper
 
Understanding Hardware Transactional Memory
Understanding Hardware Transactional MemoryUnderstanding Hardware Transactional Memory
Understanding Hardware Transactional MemoryC4Media
 
C++ Data-flow Parallelism sounds great! But how practical is it? Let’s see ho...
C++ Data-flow Parallelism sounds great! But how practical is it? Let’s see ho...C++ Data-flow Parallelism sounds great! But how practical is it? Let’s see ho...
C++ Data-flow Parallelism sounds great! But how practical is it? Let’s see ho...Jason Hearne-McGuiness
 
Skiron - Experiments in CPU Design in D
Skiron - Experiments in CPU Design in DSkiron - Experiments in CPU Design in D
Skiron - Experiments in CPU Design in DMithun Hunsur
 
What's new with JavaScript in GNOME: The 2020 edition (GUADEC 2020)
What's new with JavaScript in GNOME: The 2020 edition (GUADEC 2020)What's new with JavaScript in GNOME: The 2020 edition (GUADEC 2020)
What's new with JavaScript in GNOME: The 2020 edition (GUADEC 2020)Igalia
 
C5 c++ development environment
C5 c++ development environmentC5 c++ development environment
C5 c++ development environmentsnchnchl
 
Real time stream processing presentation at General Assemb.ly
Real time stream processing presentation at General Assemb.lyReal time stream processing presentation at General Assemb.ly
Real time stream processing presentation at General Assemb.lyVarun Vijayaraghavan
 

Similar a JVM Mechanics: A Peek Under the Hood (20)

Enabling Java in Latency Sensitive Applications by Gil Tene, CTO, Azul Systems
Enabling Java in Latency Sensitive Applications by Gil Tene, CTO, Azul SystemsEnabling Java in Latency Sensitive Applications by Gil Tene, CTO, Azul Systems
Enabling Java in Latency Sensitive Applications by Gil Tene, CTO, Azul Systems
 
Enabling Java in Latency-Sensitive Applications
Enabling Java in Latency-Sensitive ApplicationsEnabling Java in Latency-Sensitive Applications
Enabling Java in Latency-Sensitive Applications
 
Understanding Java Garbage Collection - And What You Can Do About It
Understanding Java Garbage Collection - And What You Can Do About ItUnderstanding Java Garbage Collection - And What You Can Do About It
Understanding Java Garbage Collection - And What You Can Do About It
 
Winning With Java at Market Open
Winning With Java at Market OpenWinning With Java at Market Open
Winning With Java at Market Open
 
[REPEAT] Deep Learning for Developers: An Introduction, Featuring Samsung SDS...
[REPEAT] Deep Learning for Developers: An Introduction, Featuring Samsung SDS...[REPEAT] Deep Learning for Developers: An Introduction, Featuring Samsung SDS...
[REPEAT] Deep Learning for Developers: An Introduction, Featuring Samsung SDS...
 
Hexadite Real Life Django ORM
Hexadite Real Life Django ORMHexadite Real Life Django ORM
Hexadite Real Life Django ORM
 
Understanding Application Hiccups - and What You Can Do About Them
Understanding Application Hiccups - and What You Can Do About ThemUnderstanding Application Hiccups - and What You Can Do About Them
Understanding Application Hiccups - and What You Can Do About Them
 
How to write clean & testable code without losing your mind
How to write clean & testable code without losing your mindHow to write clean & testable code without losing your mind
How to write clean & testable code without losing your mind
 
Deep Learning for Developers: An Introduction, Featuring Samsung SDS (AIM301-...
Deep Learning for Developers: An Introduction, Featuring Samsung SDS (AIM301-...Deep Learning for Developers: An Introduction, Featuring Samsung SDS (AIM301-...
Deep Learning for Developers: An Introduction, Featuring Samsung SDS (AIM301-...
 
C++ How to program
C++ How to programC++ How to program
C++ How to program
 
Implementing the Genetic Algorithm in XSLT: PoC
Implementing the Genetic Algorithm in XSLT: PoCImplementing the Genetic Algorithm in XSLT: PoC
Implementing the Genetic Algorithm in XSLT: PoC
 
JetBrains Day Seoul - Exploring .NET’s memory management – a trip down memory...
JetBrains Day Seoul - Exploring .NET’s memory management – a trip down memory...JetBrains Day Seoul - Exploring .NET’s memory management – a trip down memory...
JetBrains Day Seoul - Exploring .NET’s memory management – a trip down memory...
 
Next Generation Indexes For Big Data Engineering (ODSC East 2018)
Next Generation Indexes For Big Data Engineering (ODSC East 2018)Next Generation Indexes For Big Data Engineering (ODSC East 2018)
Next Generation Indexes For Big Data Engineering (ODSC East 2018)
 
GWT is Smarter Than You
GWT is Smarter Than YouGWT is Smarter Than You
GWT is Smarter Than You
 
Understanding Hardware Transactional Memory
Understanding Hardware Transactional MemoryUnderstanding Hardware Transactional Memory
Understanding Hardware Transactional Memory
 
C++ Data-flow Parallelism sounds great! But how practical is it? Let’s see ho...
C++ Data-flow Parallelism sounds great! But how practical is it? Let’s see ho...C++ Data-flow Parallelism sounds great! But how practical is it? Let’s see ho...
C++ Data-flow Parallelism sounds great! But how practical is it? Let’s see ho...
 
Skiron - Experiments in CPU Design in D
Skiron - Experiments in CPU Design in DSkiron - Experiments in CPU Design in D
Skiron - Experiments in CPU Design in D
 
What's new with JavaScript in GNOME: The 2020 edition (GUADEC 2020)
What's new with JavaScript in GNOME: The 2020 edition (GUADEC 2020)What's new with JavaScript in GNOME: The 2020 edition (GUADEC 2020)
What's new with JavaScript in GNOME: The 2020 edition (GUADEC 2020)
 
C5 c++ development environment
C5 c++ development environmentC5 c++ development environment
C5 c++ development environment
 
Real time stream processing presentation at General Assemb.ly
Real time stream processing presentation at General Assemb.lyReal time stream processing presentation at General Assemb.ly
Real time stream processing presentation at General Assemb.ly
 

Más de Azul Systems Inc.

Advancements ingc andc4overview_linkedin_oct2017
Advancements ingc andc4overview_linkedin_oct2017Advancements ingc andc4overview_linkedin_oct2017
Advancements ingc andc4overview_linkedin_oct2017Azul Systems Inc.
 
Zulu Embedded Java Introduction
Zulu Embedded Java IntroductionZulu Embedded Java Introduction
Zulu Embedded Java IntroductionAzul Systems Inc.
 
What's New in the JVM in Java 8?
What's New in the JVM in Java 8?What's New in the JVM in Java 8?
What's New in the JVM in Java 8?Azul Systems Inc.
 
DotCMS Bootcamp: Enabling Java in Latency Sensitivie Environments
DotCMS Bootcamp: Enabling Java in Latency Sensitivie EnvironmentsDotCMS Bootcamp: Enabling Java in Latency Sensitivie Environments
DotCMS Bootcamp: Enabling Java in Latency Sensitivie EnvironmentsAzul Systems Inc.
 
ObjectLayout: Closing the (last?) inherent C vs. Java speed gap
ObjectLayout: Closing the (last?) inherent C vs. Java speed gapObjectLayout: Closing the (last?) inherent C vs. Java speed gap
ObjectLayout: Closing the (last?) inherent C vs. Java speed gapAzul Systems Inc.
 
Azul Systems open source guide
Azul Systems open source guideAzul Systems open source guide
Azul Systems open source guideAzul Systems Inc.
 
Intelligent Trading Summit NY 2014: Understanding Latency: Key Lessons and Tools
Intelligent Trading Summit NY 2014: Understanding Latency: Key Lessons and ToolsIntelligent Trading Summit NY 2014: Understanding Latency: Key Lessons and Tools
Intelligent Trading Summit NY 2014: Understanding Latency: Key Lessons and ToolsAzul Systems Inc.
 
Understanding Java Garbage Collection
Understanding Java Garbage CollectionUnderstanding Java Garbage Collection
Understanding Java Garbage CollectionAzul Systems Inc.
 
The evolution of OpenJDK: From Java's beginnings to 2014
The evolution of OpenJDK: From Java's beginnings to 2014The evolution of OpenJDK: From Java's beginnings to 2014
The evolution of OpenJDK: From Java's beginnings to 2014Azul Systems Inc.
 
Push Technology's latest data distribution benchmark with Solarflare and Zing
Push Technology's latest data distribution benchmark with Solarflare and ZingPush Technology's latest data distribution benchmark with Solarflare and Zing
Push Technology's latest data distribution benchmark with Solarflare and ZingAzul Systems Inc.
 
Webinar: Zing Vision: Answering your toughest production Java performance que...
Webinar: Zing Vision: Answering your toughest production Java performance que...Webinar: Zing Vision: Answering your toughest production Java performance que...
Webinar: Zing Vision: Answering your toughest production Java performance que...Azul Systems Inc.
 
Speculative Locking: Breaking the Scale Barrier (JAOO 2005)
Speculative Locking: Breaking the Scale Barrier (JAOO 2005)Speculative Locking: Breaking the Scale Barrier (JAOO 2005)
Speculative Locking: Breaking the Scale Barrier (JAOO 2005)Azul Systems Inc.
 
Towards a Scalable Non-Blocking Coding Style
Towards a Scalable Non-Blocking Coding StyleTowards a Scalable Non-Blocking Coding Style
Towards a Scalable Non-Blocking Coding StyleAzul Systems Inc.
 
Experiences with Debugging Data Races
Experiences with Debugging Data RacesExperiences with Debugging Data Races
Experiences with Debugging Data RacesAzul Systems Inc.
 
Lock-Free, Wait-Free Hash Table
Lock-Free, Wait-Free Hash TableLock-Free, Wait-Free Hash Table
Lock-Free, Wait-Free Hash TableAzul Systems Inc.
 
How NOT to Write a Microbenchmark
How NOT to Write a MicrobenchmarkHow NOT to Write a Microbenchmark
How NOT to Write a MicrobenchmarkAzul Systems Inc.
 
The Art of Java Benchmarking
The Art of Java BenchmarkingThe Art of Java Benchmarking
The Art of Java BenchmarkingAzul Systems Inc.
 
Azul Zulu on Azure Overview -- OpenTech CEE Workshop, Warsaw, Poland
Azul Zulu on Azure Overview -- OpenTech CEE Workshop, Warsaw, PolandAzul Zulu on Azure Overview -- OpenTech CEE Workshop, Warsaw, Poland
Azul Zulu on Azure Overview -- OpenTech CEE Workshop, Warsaw, PolandAzul Systems Inc.
 

Más de Azul Systems Inc. (20)

Advancements ingc andc4overview_linkedin_oct2017
Advancements ingc andc4overview_linkedin_oct2017Advancements ingc andc4overview_linkedin_oct2017
Advancements ingc andc4overview_linkedin_oct2017
 
Zulu Embedded Java Introduction
Zulu Embedded Java IntroductionZulu Embedded Java Introduction
Zulu Embedded Java Introduction
 
What's New in the JVM in Java 8?
What's New in the JVM in Java 8?What's New in the JVM in Java 8?
What's New in the JVM in Java 8?
 
DotCMS Bootcamp: Enabling Java in Latency Sensitivie Environments
DotCMS Bootcamp: Enabling Java in Latency Sensitivie EnvironmentsDotCMS Bootcamp: Enabling Java in Latency Sensitivie Environments
DotCMS Bootcamp: Enabling Java in Latency Sensitivie Environments
 
ObjectLayout: Closing the (last?) inherent C vs. Java speed gap
ObjectLayout: Closing the (last?) inherent C vs. Java speed gapObjectLayout: Closing the (last?) inherent C vs. Java speed gap
ObjectLayout: Closing the (last?) inherent C vs. Java speed gap
 
Azul Systems open source guide
Azul Systems open source guideAzul Systems open source guide
Azul Systems open source guide
 
Intelligent Trading Summit NY 2014: Understanding Latency: Key Lessons and Tools
Intelligent Trading Summit NY 2014: Understanding Latency: Key Lessons and ToolsIntelligent Trading Summit NY 2014: Understanding Latency: Key Lessons and Tools
Intelligent Trading Summit NY 2014: Understanding Latency: Key Lessons and Tools
 
Understanding Java Garbage Collection
Understanding Java Garbage CollectionUnderstanding Java Garbage Collection
Understanding Java Garbage Collection
 
The evolution of OpenJDK: From Java's beginnings to 2014
The evolution of OpenJDK: From Java's beginnings to 2014The evolution of OpenJDK: From Java's beginnings to 2014
The evolution of OpenJDK: From Java's beginnings to 2014
 
Push Technology's latest data distribution benchmark with Solarflare and Zing
Push Technology's latest data distribution benchmark with Solarflare and ZingPush Technology's latest data distribution benchmark with Solarflare and Zing
Push Technology's latest data distribution benchmark with Solarflare and Zing
 
Webinar: Zing Vision: Answering your toughest production Java performance que...
Webinar: Zing Vision: Answering your toughest production Java performance que...Webinar: Zing Vision: Answering your toughest production Java performance que...
Webinar: Zing Vision: Answering your toughest production Java performance que...
 
Speculative Locking: Breaking the Scale Barrier (JAOO 2005)
Speculative Locking: Breaking the Scale Barrier (JAOO 2005)Speculative Locking: Breaking the Scale Barrier (JAOO 2005)
Speculative Locking: Breaking the Scale Barrier (JAOO 2005)
 
Java vs. C/C++
Java vs. C/C++Java vs. C/C++
Java vs. C/C++
 
What's Inside a JVM?
What's Inside a JVM?What's Inside a JVM?
What's Inside a JVM?
 
Towards a Scalable Non-Blocking Coding Style
Towards a Scalable Non-Blocking Coding StyleTowards a Scalable Non-Blocking Coding Style
Towards a Scalable Non-Blocking Coding Style
 
Experiences with Debugging Data Races
Experiences with Debugging Data RacesExperiences with Debugging Data Races
Experiences with Debugging Data Races
 
Lock-Free, Wait-Free Hash Table
Lock-Free, Wait-Free Hash TableLock-Free, Wait-Free Hash Table
Lock-Free, Wait-Free Hash Table
 
How NOT to Write a Microbenchmark
How NOT to Write a MicrobenchmarkHow NOT to Write a Microbenchmark
How NOT to Write a Microbenchmark
 
The Art of Java Benchmarking
The Art of Java BenchmarkingThe Art of Java Benchmarking
The Art of Java Benchmarking
 
Azul Zulu on Azure Overview -- OpenTech CEE Workshop, Warsaw, Poland
Azul Zulu on Azure Overview -- OpenTech CEE Workshop, Warsaw, PolandAzul Zulu on Azure Overview -- OpenTech CEE Workshop, Warsaw, Poland
Azul Zulu on Azure Overview -- OpenTech CEE Workshop, Warsaw, Poland
 

Último

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...panagenda
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfNeo4j
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Scott Andery
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterMydbops
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersRaghuram Pandurangan
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 

Último (20)

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdf
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information Developers
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 

JVM Mechanics: A Peek Under the Hood

  • 1. JVM Mechanics A peek under the hood Gil Tene, CTO & co-Founder, Azul Systems ©2012 Azul Systems, Inc.
  • 2. About me: Gil Tene co-founder, CTO @Azul Systems Working JVMs since 2001, Managed runtimes since 1989 Created Pauseless & C4 core GC algorithms (Tene, Wolf) A Long history building Virtual & Physical Machines, Operating Systems, Enterprise apps, etc... ©2012 Azul Systems, Inc. * working on real-world trash compaction issues, circa 2004
  • 3. About Azul Vega We make scalable Virtual Machines Have built “whatever it takes to get job done” since 2002 3 generations of custom SMP Multi-core HW (Vega) Now Pure software for commodity x86 (Zing) C4 “Industry firsts” in Garbage collection, elastic memory, Java virtualization, memory scale ©2012 Azul Systems, Inc.
  • 4. High level agenda Compiler stuff Adaptive behavior stuff Ordering stuff Garbage Collection stuff Some chest beating Open discussion ©2012 Azul Systems, Inc.
  • 6. The JIT compilers transforms code The code actually executed can be very different than the code you write
  • 8. Code can be reordered... int doMath(int x, int y, int z) { int a = x + y; int b = x - y; int c = z + x; return a + b; } Can be reordered to: int doMath(int x, int y, int z) { int c = z + x; int b = x - y; int a = x + y; return a + b; } ©2012 Azul Systems, Inc.
  • 9. Dead code can be removed int doMath(int x, int y, int z) { int a = x + y; int b = x - y; int c = z + x; return a + b; } Can be reduced to: int doMath(int x, int y, int z) { int a = x + y; int b = x - y; return a + b; } ©2012 Azul Systems, Inc.
  • 10. Values can be propagated int doMath(int x, int y, int z) { int a = x + y; int b = x - y; int c = z + x; return a + b; } Can be reduced to: int doMath(int x, int y, int z) { return x + y + x - y; } ©2012 Azul Systems, Inc.
  • 11. Math can be simplified int doMath(int x, int y, int z) { int a = x + y; int b = x - y; int c = z + x; return a + b; } Can be reduced to: int doMath(int x, int y, int z) { return x + x; } ©2012 Azul Systems, Inc.
  • 12. So why does this matter Keep your code “readable” largestValueLog = Math.log(largestValueWithSingleUnitResolution); magnitude = (int) Math.ceil(largestValueLog/Math.log(2.0)); subBucketMagnitude = (magnitude > 1) ? magnitude : 1; subBucketCount = (int) Math.pow(2, subBucketMagnitude); subBucketMask = subBucketCount - 1; Hard enough to follow as it is No value in “optimizing” human-readable meaning away Compiled code will end up the same anyway ©2012 Azul Systems, Inc.
  • 14. Reads can be cached int distanceRatio(Object a) { int distanceTo = a.getX() - start; int distanceAfter = end - a.getX(); return distanceTo/distanceAfter; } Is the same as int distanceRatio(Object a) { int x = a.getX(); int distanceTo = x - start; int distanceAfter = end - x; return distanceTo/distanceAfter; } ©2012 Azul Systems, Inc.
  • 15. Reads can be cached void loopUntilFlagSet(Object a) { while (!a.flagIsSet()) { loopcount++; } } Is the same as: void loopUntilFlagSet(Object a) { boolean flagIsSet = a.flagIsSet(); while (!flagIsSet) { loopcount++; } } ©2012 Azul Systems, Inc. That’s what volatile is for...
  • 16. Writes can be eliminated Intermediate values might never be visible void updateDistance(Object a) { int distance = 100; a.setX(distance); a.setX(distance * 2); a.setX(distance * 3); } Is the same as void updateDistance(Object a) { a.setX(300); } ©2012 Azul Systems, Inc.
  • 17. Writes can be eliminated Intermediate values might never be visible void updateDistance(Object a) { a. setVisibleValue(0); for (int i = 0; i < 1000000; i++) { a.setInternalValue(i); } a.setVisibleValue(a.getInternalValue()); } Is the same as void updateDistance(Object a) { a.setInternalValue(1000000); a.setVisibleValue(1000000); } ©2012 Azul Systems, Inc.
  • 18. Inlining... public class Thing { private int x; public final int getX() { return x }; } ... myX = thing.getX(); Is the same as Class Thing { int x; } ... myX = thing.x; ©2012 Azul Systems, Inc.
  • 19. Things JIT compilers can do ..and static compilers can have a hard time with
  • 20. Class Hierarchy Analysis (CHA) Can perform global analysis on currently loaded code Deduce stuff about inheritance, method overrides, etc. Can make optimization decisions based on assumptions Re-evaluate assumptions when loading new classes Throw away code that conflicts with assumptions before class loading makes them invalid ©2012 Azul Systems, Inc.
• 21. Inlining works without “final” public class Thing { private int x; public int getX() { return x; } } ... myX = thing.getX(); Is the same as class Thing { int x; } ... myX = thing.x; As long as there is only one implementer of getX() ©2012 Azul Systems, Inc.
• 22. Speculative stuff The power of the “uncommon trap” Being able to throw away wrong code is very useful Speculatively assuming callee types: a “polymorphic” call site can turn out to be “monomorphic” or “megamorphic” Can make virtual calls static even without CHA Can speculatively inline things without CHA Speculatively assuming branch behavior: “We’ve only ever seen this thing go one way, so....” ©2012 Azul Systems, Inc.
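A sketch of branch speculation (hypothetical types and method names): if the profile has only ever seen one side of a branch, the JIT can compile just that path and plant an uncommon trap on the other.

interface Request {
    boolean isControl();
}

class Dispatcher {
    void handle(Request r) {
        if (r.isControl()) {
            // If profiling has never seen a control request, the JIT can compile this
            // branch as an "uncommon trap": taking it deoptimizes the method, the
            // interpreter handles this invocation, and the code is later recompiled
            // with the new profile.
            handleControl(r);
        } else {
            handleData(r);   // the only path that needs machine code while the assumption holds
        }
    }

    private void handleControl(Request r) { /* rare path */ }
    private void handleData(Request r)    { /* hot path */ }
}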
• 23. Adaptive compilation makes cleaner code practical Reduces the need to trade off clean design against speed E.g. “final” should be used on methods only when you want to prohibit extension or overriding; it has no effect on speed E.g. branching can be written “naturally” ©2012 Azul Systems, Inc.
• 25. Adaptive compilation is... adaptive Measuring actual behavior is harder Micro-benchmarking is an art JITs are a moving target “Warmup” techniques can often fail ©2012 Azul Systems, Inc.
• 26. Warmup problems Common example: A trading system wants the first trade to be fast So run 20,000 “fake” messages through the system to warm up, letting the JIT compilers optimize the code (and deopt) before actual trades What really happens: Code is written to do different things “if this is a fake message”, e.g. “Don’t send to the exchange if this is a fake message” JITs optimize for the fake path, including speculatively assuming “fake” The first real message through deopts... ©2012 Azul Systems, Inc.
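The anti-pattern, sketched with hypothetical names: a “fake” flag on the hot path gives the JIT a warmup profile in which the real branch is never taken.

interface Order {}
interface Exchange { void send(Order order); }

class Trader {
    private final Exchange exchange;
    Trader(Exchange exchange) { this.exchange = exchange; }

    // Anti-pattern: warmup-only behavior decided by a flag on the hot path.
    void process(Order order, boolean fake) {
        // ... pricing, risk checks, etc. ...
        if (!fake) {
            // Never taken during a 20,000-message "fake" warmup, so the JIT may compile
            // it as an uncommon trap; the first real order deoptimizes right here, on the
            // one message that was supposed to be fast.
            exchange.send(order);
        }
    }
}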
• 27. Warmup tips... (to get the “first real thing” to be fast) The system should not distinguish between “real” and “fake”; make that an external concern Avoid all conditional code based on “fake” Avoid all class-specific calls based on “fake” (“object oriented ifs”); use “real” input sources and output targets instead Have the system output to a sink target that knows it is “fake” Make output decisions based on data, not code E.g. an array of sink targets, with the array index computed from the message payload ©2012 Azul Systems, Inc.
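One way the “decide with data, not code” tip can look, as a sketch (interface and method names are assumptions, not from the deck): warmup and production traffic share the same code path and the same call site; only the sink wired into the array differs.

interface Event { int routeId(); }       // route index derived from the message payload
interface Sink  { void accept(Event e); }

class Pipeline {
    private final Sink[] sinks;          // e.g. [0] = exchange gateway, [1] = discard-only warmup sink
    Pipeline(Sink[] sinks) { this.sinks = sinks; }

    void process(Event e) {
        // Same code path and same call site whether the traffic is warmup or production;
        // which sink gets hit is decided by data, not by a "fake" branch in the code.
        sinks[e.routeId()].accept(e);
    }
}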
• 29. Ordering of operations Within a thread, it’s trivial: “happens-before” Across threads? News flash: CPU memory ordering doesn’t matter There is a much bigger culprit at play: compilers The code will be reordered before your CPU ever sees it The only ordering rules that matter are the JMM ones Intuitive read: “happens-before” holds within threads Things can move into, but not out of, synchronized blocks Volatile stuff is a bit more tricky... ©2012 Azul Systems, Inc.
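A minimal sketch of the JMM rule at work (field names are illustrative): the plain write is ordered before the volatile write in program order, and the volatile write happens-before a volatile read that observes it, so the reader is guaranteed to see the payload.

class Handoff {
    private int payload;                 // plain field
    private volatile boolean ready;      // volatile publication flag

    void producer() {
        payload = 42;                    // (1) plain write
        ready = true;                    // (2) volatile write; (1) happens-before (2)
    }

    void consumer() {
        if (ready) {                     // (3) volatile read that observes (2)
            System.out.println(payload); // (4) guaranteed to print 42, not a stale value
        }
    }
}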
• 30. Ordering of operations Source: http://g.oswego.edu/dl/jmm/cookbook.html ©2012 Azul Systems, Inc.
• 32. Most of what people seem to “know” about Garbage Collection is wrong In many cases, it’s much better than you may think GC is extremely efficient, much more so than malloc() Dead objects cost nothing to collect GC will find all the dead objects (including cyclic graphs) ... In many cases, it’s much worse than you may think Yes, it really does stop for ~1 sec per live GB (in most JVMs) No, GC does not mean you can’t have memory leaks No, those pauses you eliminated from your 20 minute test are not gone ... ©2012 Azul Systems, Inc.
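A small runnable sketch (mine, not from the deck) of two of these points: a reference cycle with no external references is ordinary garbage to a tracing collector, while anything still reachable from a static root is a “leak” no collector can fix.

import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

public class GcFacts {
    static class Node { Node other; byte[] data = new byte[1024]; }

    static final List<Node> accidentalCache = new ArrayList<>();   // still reachable = a real "leak"

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.other = b;
        b.other = a;                       // reference cycle: a <-> b
        WeakReference<Node> witness = new WeakReference<>(a);

        a = null;
        b = null;                          // no external references left; the cycle is garbage
        System.gc();                       // only a hint, but most JVMs will trace and reclaim the cycle
        System.out.println("cycle collected? " + (witness.get() == null));

        accidentalCache.add(new Node());   // reachable from a static root: no collector can reclaim this
    }
}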
• 33. Generational Collection Weak Generational Hypothesis: “most objects die young” Focus collection efforts on the young generation: Use a moving collector: work is linear to the live set The live set in the young generation is a small % of the space Promote objects that live long enough to older generations Only collect older generations as they fill up The “generational filter” reduces the rate of allocation into older generations Tends to be an order of magnitude more efficient Great way to keep up with high allocation rates A practical necessity for keeping up with processor throughput ©2012 Azul Systems, Inc.
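A sketch of the allocation pattern the hypothesis describes (names are hypothetical): per-request temporaries die almost immediately, so a young-gen copying collector, whose work is proportional to the survivors it copies, barely touches them; only the few long-lived objects get promoted.

import java.util.HashMap;
import java.util.Map;

class RequestHandler {
    private final Map<String, Session> sessions = new HashMap<>();   // long-lived: gets promoted

    String handle(String userId, String input) {
        // Short-lived allocations: the token array, trimmed strings, and builder all die
        // before the next request, so a young-gen copying collector never copies them.
        String[] tokens = input.split(",");
        StringBuilder reply = new StringBuilder();
        for (String t : tokens) {
            reply.append(t.trim()).append(';');
        }
        // The session survives many young-gen cycles and is promoted to the old generation.
        sessions.computeIfAbsent(userId, Session::new);
        return reply.toString();
    }
}

class Session {
    final String id;
    Session(String id) { this.id = id; }
}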
  • 34. GC Efficiency: Empty memory vs. CPU ©2012 Azul Systems, Inc.
• 35. Heap size vs. GC CPU % [chart: GC CPU % on the vertical axis (up to 100%), heap size on the horizontal axis starting at the live set]
  • 36. Two Intuitive limits If we had exactly 1 byte of empty memory at all times, the collector would have to work “very hard”, and GC would take 100% of the CPU time If we had infinite empty memory, we would never have to collect, and GC would take 0% of the CPU time GC CPU % will follow a rough 1/x curve between these two limit points, dropping as the amount of memory increases. ©2012 Azul Systems, Inc.
  • 37. Empty memory needs (empty memory == CPU power) The amount of empty memory in the heap is the dominant factor controlling the amount of GC work For both Copy and Mark/Compact collectors, the amount of work per cycle is linear to live set The amount of memory recovered per cycle is equal to the amount of unused memory (heap size) - (live set) The collector has to perform a GC cycle when the empty memory runs out A Copy or Mark/Compact collector’s efficiency doubles with every doubling of the empty memory ©2012 Azul Systems, Inc.
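A back-of-the-envelope way to state this (the symbols are mine, not from the deck): with live set \(L\), heap size \(H\), and a collector-dependent constant \(k\),

\[
\text{GC cost per byte allocated} \;\propto\; \frac{\text{work per cycle}}{\text{memory recovered per cycle}} \;=\; \frac{k \cdot L}{H - L}
\]

Doubling the empty memory \(H - L\) halves the collector cost per byte the application allocates, which is the “efficiency doubles with every doubling of the empty memory” rule, and matches the rough 1/x curve two slides up.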
  • 38. What empty memory controls Empty memory controls efficiency (amount of collector work needed per amount of application work performed) Empty memory controls the frequency of pauses (if the collector performs any Stop-the-world operations) Empty memory DOES NOT control pause times (only their frequency) In Mark/Sweep/Compact collectors that pause for sweeping, more empty memory means less frequent but LARGER pauses ©2012 Azul Systems, Inc.
  • 39. GC and latency: That pesky stop-the-world thing ©2012 Azul Systems, Inc.
• 40. Delaying the inevitable Some form of copying/compaction is inevitable in practice And compacting anything requires scanning/fixing all references to it Delay tactics focus on getting “easy empty space” first This is the focus for the vast majority of GC tuning Most objects die young [Generational] So collect young objects only, as much as possible. Hope for short STW. But eventually, some old dead objects must be reclaimed Most old dead space can be reclaimed without moving it [e.g. CMS]: track dead space in lists, and reuse it in place But eventually, space gets fragmented, and needs to be moved Much of the heap is not “popular” [e.g. G1, “Balanced”] A non-popular region will only be pointed to from a small % of the heap So compact non-popular regions in short stop-the-world pauses But eventually, popular objects and regions need to be compacted ©2012 Azul Systems, Inc.
• 41. Memory use How many of you use heap sizes of: more than ½ GB? more than 1 GB? more than 2 GB? more than 4 GB? more than 10 GB? more than 20 GB? more than 50 GB? ©2012 Azul Systems, Inc.
  • 42. Reality check: servers in 2012 Retail prices, major web server store (US $, Oct 2012) 24 vCore, 128GB server ≈ $5K 24 vCore, 256GB server ≈ $8K 32 vCore, 384GB server ≈ $14K 48 vCore, 512GB server ≈ $19K 64 vCore, 1TB server ≈ $36K Cheap (< $1/GB/Month), and roughly linear to ~1TB 10s to 100s of GB/sec of memory bandwidth
  • 43. The Application Memory Wall A simple observation: Application instances appear to be unable to make effective use of modern server memory capacities The size of application instances as a % of a server’s capacity is rapidly dropping ©2012 Azul Systems, Inc.
• 44. How much memory do applications need? “640KB ought to be enough for anybody” WRONG! “I've said some stupid things and some wrong things, but not that. No one involved in computers would ever say that a certain amount of memory is enough for all time …” - Bill Gates, 1996 So what’s the right number? 6,400K? 64,000K? 640,000K? 6,400,000K? 64,000,000K? There is no right number Target moves at 50x-100x per decade ©2012 Azul Systems, Inc.
• 45. “Tiny” application history Assuming Moore’s Law means “transistor counts grow at ≈2x every ≈18 months”, it also means memory size grows ≈100x every 10 years 1980: 100KB apps on a ¼ to ½ MB server 1990: 10MB apps on a 32 – 64 MB server 2000: 1GB apps on a 2 – 4 GB server 2010: ??? GB apps on 256 GB (the Application Memory Wall) “Tiny”: would be “silly” to distribute ©2012 Azul Systems, Inc.
  • 46. What is causing the Application Memory Wall? Garbage Collection is a clear and dominant cause There seem to be practical heap size limits for applications with responsiveness requirements [Virtually] All current commercial JVMs will exhibit a multi-second pause on a normally utilized 2-6GB heap. It’s a question of “When” and “How often”, not “If”. GC tuning only moves the “when” and the “how often” around Root cause: The link between scale and responsiveness ©2012 Azul Systems, Inc.
  • 47. The problems that need solving (areas where the state of the art needs improvement) Robust Concurrent Marking In the presence of high mutation and allocation rates Cover modern runtime semantics (e.g. weak refs, lock deflation) Compaction that is not monolithic-stop-the-world E.g. stay responsive while compacting ¼ TB heaps Must be robust: not just a tactic to delay STW compaction [current “incremental STW” attempts fall short on robustness] Young-Gen that is not monolithic-stop-the-world Stay responsive while promoting multi-GB data spikes Concurrent or “incremental STW” may both be ok Surprisingly little work done in this specific area ©2012 Azul Systems, Inc.
• 49. Incontinuities in Java platform execution [jHiccup charts: hiccups by time interval (max per interval, 99%, 99.9%, 99.99%, max) and by percentile distribution; max hiccup = 1665.024 msec] ©2012 Azul Systems, Inc.
  • 50. jHiccup A tool for capturing and displaying platform hiccups Records any observed non-continuity of the underlying platform Plots results in simple, consistent format Simple, non-intrusive As simple as adding the word “jHiccup” to your java launch line % jHiccup java myflags myApp (Or use as a java agent) Adds a background thread that samples time @ 1000/sec Open Source Released to the public domain, creative commons CC0 ©2012 Azul Systems, Inc.
• 51. Telco App Example [jHiccup charts: hiccup duration by time interval (max per interval, 99%, 99.9%, 99.99%, max) and by percentile distribution, with optional SLA plotting of hiccup duration at percentile levels] ©2012 Azul Systems, Inc.
• 53. Idle App on Quiet System vs. Idle App on Busy System [jHiccup charts, hiccups by time interval and by percentile distribution: quiet system max hiccup = 22.336 msec; busy system max hiccup = 49.728 msec] ©2012 Azul Systems, Inc.
• 54. Idle App on Quiet System vs. Idle App on Dedicated System [jHiccup charts: quiet system max hiccup = 22.336 msec; dedicated system max hiccup = 0.411 msec] ©2012 Azul Systems, Inc.
• 55. EHCache: 1GB data set under load [jHiccup charts, hiccups by time interval and by percentile distribution: max hiccup = 3448.832 msec] ©2012 Azul Systems, Inc.
  • 56. Fun with jHiccup ©2012 Azul Systems, Inc.
• 57. Oracle HotSpot CMS, 1GB in an 8GB heap vs. Zing 5, 1GB in an 8GB heap [jHiccup charts: HotSpot CMS max hiccup = 13156.352 msec; Zing max hiccup = 20.384 msec] ©2012 Azul Systems, Inc.
• 58. Oracle HotSpot CMS, 1GB in an 8GB heap vs. Zing 5, 1GB in an 8GB heap, drawn to scale [same data as the previous slide, both charts on one vertical axis: HotSpot CMS max hiccup = 13156.352 msec; Zing max hiccup = 20.384 msec] ©2012 Azul Systems, Inc.
  • 59. What you can expect (from Zing) in the low latency world Assuming individual transaction work is “short” (on the order of 1 msec), and assuming you don’t have 100s of runnable threads competing for 10 cores... “Easily” get your application to < 10 msec worst case With some tuning, 2-3 msec worst case Can go to below 1 msec worst case... May require heavy tuning/tweaking Mileage WILL vary ©2012 Azul Systems, Inc.
• 60. Oracle HotSpot (pure newgen) vs. Zing: low latency trading application [jHiccup charts: HotSpot max hiccup = 22.656 msec; Zing max hiccup = 1.568 msec] ©2012 Azul Systems, Inc.
• 61. Oracle HotSpot (pure newgen) vs. Zing: low latency trading application, drawn to scale [same data on one vertical axis: HotSpot max hiccup = 22.656 msec; Zing max hiccup = 1.568 msec] ©2012 Azul Systems, Inc.