Contents
- Preface
- The data structure of HashMap
- Diving into the source code
  - Two important parameters
  - Member variables
  - Four constructors
  - Inserting data: put()
  - The hash function: hash()
  - Dynamic resizing: resize()
- Node treeification and red-black tree splitting
  - Node treeification
  - Red-black tree splitting
- Summary
- Closing thoughts
Preface
Today we are going to study HashMap, one of the most commonly used collection classes in Java.
One note up front: the HashMap source code analyzed in this article is based on JDK 1.8. Unless otherwise specified, subsequent collection-class source analyses are also based on version 1.8.
The data structure of HashMap
Opening the HashMap source file, you can see that it extends AbstractMap and implements Map, the root interface of Java's map collections, as well as the Cloneable and Serializable interfaces, so a HashMap can be cloned and serialized.
The underlying structure of HashMap is a concrete implementation of a hash table: the corresponding hash computation quickly locates the target element's position in the table, giving very fast lookups, which is why HashMap is so widely used in everyday development. Ideally, each element corresponds to a unique hash value, and lookup performance is optimal.
In practice, however, that is impossible, because hash tables suffer from "hash collisions". When a collision occurs, HashMap resolves it with "separate chaining" (also known as the "zipper method"), i.e., a structure combining an array with linked lists. In HashMap's code comments, the array slots are called "buckets", and the job of the hash function is to map a key to a position among the buckets; if a bucket holds multiple elements, they are stored as a linked list (at least, that was the whole story before JDK 1.8).
This is HashMap's storage structure:
If you have questions about "separate chaining" or "hash collisions", please read my earlier article Data Structures: Hash Tables and Solutions to Hash Collisions.
For convenience, the array slots are simply called "buckets" below.
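To make the chaining structure concrete, here is a minimal standalone sketch of such a node; the field names mirror the HashMap.Node class from the JDK source, but this copy is only illustrative.
class Node<K, V> {
    final int hash;      // cached hash of the key
    final K key;
    V value;
    Node<K, V> next;     // next node in the same bucket's chain

    Node(int hash, K key, V value, Node<K, V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
}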
Diving into the source code
Two important parameters
Before reading the source, we first need to know two important HashMap parameters: the "initial capacity" and the "load factor".
The initial capacity is the size of the bucket array. The load factor determines how full the HashMap may get before it expands (rehashes): once the number of elements exceeds the product of the load factor and the current capacity, the hash table is resized.
In HashMap the default load factor is 0.75, a compromise struck after weighing time cost against space cost: if the load factor is too large, the chance of collisions rises and lookups become less efficient; if it is too small, rehashing happens frequently and performance suffers. When choosing an initial capacity, consider the expected number of entries and the load factor so as to minimize rehash operations: if the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash will ever occur.
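As a concrete illustration, here is a small self-contained sketch (class and variable names are mine, not from the JDK) of pre-sizing a HashMap so that inserting the expected number of entries never triggers a rehash:
import java.util.HashMap;
import java.util.Map;

// A minimal sketch: choose initialCapacity > expectedEntries / loadFactor
// so that no rehash happens while filling the map.
public class PreSizingDemo {
    public static void main(String[] args) {
        int expectedEntries = 1000;
        float loadFactor = 0.75f;
        // 1000 / 0.75 is about 1334; HashMap rounds the capacity up to a power
        // of 2 (2048), whose threshold (2048 * 0.75 = 1536) comfortably exceeds 1000
        int initialCapacity = (int) (expectedEntries / loadFactor) + 1;
        Map<String, Integer> map = new HashMap<>(initialCapacity, loadFactor);
        for (int i = 0; i < expectedEntries; i++) {
            map.put("key-" + i, i);
        }
        System.out.println("size = " + map.size()); // 1000, no resize along the way
    }
}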
Member variables
Okay, enough preamble; let's start digging into the source. First, HashMap's main member variables:
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
static final int MAXIMUM_CAPACITY = 1 << 30;
static final float DEFAULT_LOAD_FACTOR = 0.75f;
static final int TREEIFY_THRESHOLD = 8;
static final int UNTREEIFY_THRESHOLD = 6;
static final int MIN_TREEIFY_CAPACITY = 64;
transient Node<K,V>[] table;
transient Set<Map.Entry<K,V>> entrySet;
transient int size;
int threshold;
final float loadFactor;
As you can see, HashMap has quite a few member variables, some initialized with values. Let's explain them one by one.
DEFAULT_INITIAL_CAPACITY: the default initial capacity, 1 << 4, i.e. 16; the capacity must always be a power of 2.
DEFAULT_LOAD_FACTOR: the default load factor, 0.75.
MAXIMUM_CAPACITY: the maximum capacity, 2^30.
TREEIFY_THRESHOLD: the treeification threshold; a bucket whose list grows beyond this length is converted into a red-black tree.
UNTREEIFY_THRESHOLD: the untreeification threshold; during a resize, a tree that shrinks to this many nodes or fewer is converted back into a linked list.
MIN_TREEIFY_CAPACITY: the minimum table capacity for treeification; below it, a long list triggers a resize instead.
table: the hash table's bucket array.
entrySet: the set of key-value pairs.
size: the number of key-value pairs, i.e. the size of the HashMap.
threshold: the resize threshold; the table is expanded when the size exceeds it; equals capacity * load factor.
loadFactor: the load factor.
Having introduced the variables, let's look at HashMap's constructors.
Four constructors
HashMap has four constructors; the code is as follows:
// Uses the default load factor
public HashMap() {
this.loadFactor = DEFAULT_LOAD_FACTOR;
}
// Uses the default load factor and creates a hash table populated with the contents of m
public HashMap(Map<? extends K, ? extends V> m) {
this.loadFactor = DEFAULT_LOAD_FACTOR;
// Add all entries of the given map
putMapEntries(m, false);
}
// Specifies both the capacity and the load factor
public HashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor);
this.loadFactor = loadFactor;
// Derive the threshold from the requested capacity
this.threshold = tableSizeFor(initialCapacity);
}
// Specifies the capacity; the load factor takes the default value
public HashMap(int initialCapacity) {
this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
It's not hard to see that the third constructor above lets you customize both the load factor and the capacity: it first checks whether the given load factor is valid, then derives the threshold from the requested capacity via the tableSizeFor() method. Why take this extra step?
Because the size of the buckets array is crucial to the entire HashMap: to guard against a caller passing in an integer that is not a power of 2, there have to be precautions. tableSizeFor() takes any integer and rounds it up to the nearest power of 2 not less than it.
static final int tableSizeFor(int cap) {
int n = cap - 1;
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
For example, passing in the integer 244: after the shifts and OR operations, the method returns the nearest power of 2 above it, 256.
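To see the rounding in action, here is a standalone sketch (the class name is mine; the method body is copied from the source above) that exercises tableSizeFor() with a few values:
// Each shift-OR "smears" the highest set bit of n downward until every lower
// bit is 1; adding 1 then yields the next power of 2. Subtracting 1 up front
// keeps exact powers of 2 unchanged.
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(244)); // 256
        System.out.println(tableSizeFor(16));  // 16: exact powers map to themselves
        System.out.println(tableSizeFor(17));  // 32
    }
}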
Inserting data: put()
The most common operation on a collection is storing data, i.e. inserting elements. In HashMap, data is inserted with the put() method.
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true);
}
The put method does no extra work itself: it simply passes the key, the value, and the key's hash into putVal() and returns the result. Let's step into that method and follow the source:
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
// If the hash table is empty, initialize it via resize()
if ((tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length;
// The target bucket is empty: create a new node holding the key
if ((p = tab[i = (n - 1) & hash]) == null)
tab[i] = newNode(hash, key, value, null);
// The target bucket already has elements: find or create the node for the key
else {
Node<K,V> e; K k;
// The first node's key equals the incoming key: let e point to p
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
e = p;
// Keys differ, but p is a red-black tree node: insert via putTreeVal()
else if (p instanceof TreeNode)
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
//p It's a list node , Traversing the linked list , Find and replace
else {
// Walk the list; if its length reaches 8, convert it to a red-black tree
for (int binCount = 0; ; ++binCount) {
if ((e = p.next) == null) {
p.next = newNode(hash, key, value, null);
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash);
break;
}
// Found a node with the same key: exit the loop with e pointing at it
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e;
}
}
// A mapping for the key already exists: replace the value and return the old one
if (e != null) { // existing mapping for key
V oldValue = e.value;
if (!onlyIfAbsent || oldValue == null)
e.value = value;
afterNodeAccess(e);
return oldValue;
}
}
++modCount;
// Expand the table if the size exceeds the threshold
if (++size > threshold)
resize();
afterNodeInsertion(evict);
return null;
}
The code looks a little complicated and the parameters a little messy, but the logic is easy to follow. Roughly, the source does the following (a short demo of the resulting behavior follows the list):
- First, hash() is called to compute the key's hash value
- Then putVal() operates on the table according to that hash:
- If the table is currently empty, initialize it via an expansion
- If the bucket to insert into is empty, create a new node and place it there
- Otherwise, start from the first element of the target bucket (why the first element? more on that later):
- If its key matches, assign the node to e and skip to the end
- If it collides but the node is a red-black tree node, insert via putTreeVal()
- Otherwise traverse the bucket's linked list; if a node with the same key is found, assign it to e and stop
- If the list length reaches 8 during traversal, convert it to a red-black tree
- Finally, check whether the table needs to be expanded
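Here is a quick sketch of the behavior the list above describes, observed through the public API (nothing here is HashMap-internal; the class name is mine):
import java.util.HashMap;

// put() returns the previous value (or null); putIfAbsent() takes the
// onlyIfAbsent = true path through putVal() and keeps the existing value.
public class PutDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1));         // null: no previous mapping
        System.out.println(map.put("a", 2));         // 1: old value returned, value replaced
        System.out.println(map.putIfAbsent("a", 3)); // 2: existing value kept
        System.out.println(map.get("a"));            // 2
    }
}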
Several key methods appear in put(), namely:
- hash(): the hash function, which computes the position for a key
- resize(): capacity expansion
- putTreeVal(): inserts a red-black tree node
- treeifyBin(): converts a bucket's list into a tree
The first two are the core methods behind HashMap's bucket and linked-list operations; the latter two are red-black tree operations introduced in JDK 1.8 and are covered later. Let's start with the first two.
The hash function: hash()
The hash() method is a core function of HashMap: when storing data, the key is fed into it to obtain the key's hash value, and only through further computation on that hash can we determine which bucket the key belongs in. Here is the method's source:
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
As the source shows, given a non-null key, hash() takes the key's hashCode, unsigned-right-shifts it by 16 bits, XORs the two, and returns the result: that value is the key's hash. This step exists to reduce collisions, because many elements' hashCodes are identical in the low bits and differ only higher up; used directly, they would collide easily.
After that, the return value of hash() is ANDed with table.length - 1, and the result is the array index (why this works is explained shortly); you can see this operation in the putVal() method above.
table.length - 1 works like a low-bit mask (a design that also optimizes the resize operation, as shown later): when ANDed with the hash it masks off the high bits and keeps only the low ones. Since a HashMap can never have an especially large bucket array (at least not until it expands automatically), the high bits of table.length - 1 are all 0, so only the low bits of the hash ever take effect, and no matter how well your hashCode() is implemented, collisions would be hard to avoid. This is where hash() proves its worth: by mixing some of the high bits' features into the low bits it adds randomness there and significantly reduces collisions.
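Here is a small sketch (the demo class is mine; the spread() helper reproduces the hash() mixing above) showing two hashCodes that differ only in a high bit: without the mix they land in the same bucket of a 16-slot table, with it they spread apart:
// With table length 16, only the low 4 bits of the hash pick the bucket.
public class HashSpreadDemo {
    static int spread(int h) {          // same mixing as hash() for non-null keys
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                     // table length, a power of 2
        int h1 = 0x10000;               // the two values differ only in high bits
        int h2 = 0x20000;
        System.out.println((h1 & (n - 1)) + " vs " + (h2 & (n - 1)));
        // 0 vs 0: a collision without mixing
        System.out.println((spread(h1) & (n - 1)) + " vs " + (spread(h2) & (n - 1)));
        // 1 vs 2: the high bits now influence the index
    }
}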
In addition, in the putVal method we can see this piece of code:
if ((p = tab[i = (n - 1) & hash]) == null)
tab[i] = newNode(hash, key, value, null);
As the comment states, this checks whether the position to insert into already holds an element, and if not, creates a new node holding the key. So why is i = (n - 1) & hash used as the index computation?
The following explanation is adapted from http://www.importnew.com/29724.html:
This is actually an optimization, made possible because the array size is always a power of 2: after an expansion, an element's new index is either its original position, or its original position plus the old capacity. The cleverness lies in the & operation: as mentioned before, & only looks at the significant bits of n - 1 (n = array length). After expansion, n gains exactly one more significant bit (n doubles, which is why keeping the array length a power of 2 matters), so we only need to check whether the hash bit at that new position is 0 or 1 to get the new index: if 0, the index is unchanged; if 1, the index is the old index plus the old capacity.
The following diagram illustrates this:
This way there is no need to recompute every element's hash on each expansion, which saves considerable time; moreover, whether the new significant bit is 0 or 1 is effectively random, so two entries that previously collided have a chance to spread out evenly during the expansion. It's a beautiful design.
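The rule is easy to verify with a small sketch (names are mine, not JDK code): with oldCap = 16, each hash either keeps its index or moves by exactly oldCap, depending only on the bit hash & oldCap:
public class ResizeIndexDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int[] hashes = {5, 21, 37, 100};
        for (int h : hashes) {
            int oldIdx = h & (oldCap - 1);
            int newIdx = h & (newCap - 1);
            // The new significant bit decides everything:
            String where = (h & oldCap) == 0 ? "stays" : "moves to oldIdx + oldCap";
            System.out.printf("hash=%d: old=%d, new=%d (%s)%n", h, oldIdx, newIdx, where);
        }
    }
}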
Dynamic resizing: resize()
In HashMap, the resize() method is triggered when the array is first initialized or when the number of elements exceeds the threshold; its job is dynamic expansion. Here is its source:
final Node<K,V>[] resize() {
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
// The bucket array already has a size greater than 0: this is a real expansion
if (oldCap > 0) {
// Already at the maximum capacity: don't expand, just set the threshold to the largest int
if (oldCap >= MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return oldTab;
}
// Shift left by 1 bit: double the capacity
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // double threshold
}
// Old capacity was 0 but old threshold > 0: the table was constructed but never populated; the initial capacity was stashed in the threshold
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
else { // zero initial threshold signifies using defaults
// Old capacity and old threshold are both 0: the table hasn't been created; use the default capacity, threshold = capacity * load factor
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
// The new threshold hasn't been set yet: compute it from the new capacity newCap
if (newThr == 0) {
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);
}
threshold = newThr;
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;
// The old table is not empty: this is an expansion, so migrate its contents
if (oldTab != null) {
// Traverse each bucket of the old array and move its nodes to the new array newTab
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
if ((e = oldTab[j]) != null) {
oldTab[j] = null;
if (e.next == null)
newTab[e.hash & (newCap - 1)] = e;
// The node is a tree node: the red-black tree needs to be split
else if (e instanceof TreeNode)
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else { // preserve order
// Ordinary linked-list nodes: traverse the list and regroup the nodes in their original order
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
The source above is a little long, but the overall logic comes down to three steps:
- Compute the new bucket array's capacity newCap and the new threshold newThr
- Create the new bucket array from the computed newCap and point table at it
- Remap the key-value nodes into the new array: if a node is a TreeNode, the red-black tree must be split (by calling the split() method); ordinary nodes are regrouped in their original order
The logic of the first two steps is fairly simple and needs little comment here. The key point is the third: the splitting of red-black trees. After an expansion the bucket array changes substantially, so the elements of an existing tree need to be re-split and remapped, possibly back into linked lists, to keep any single bucket from holding too many elements.
The split itself is implemented by TreeNode.split(); rather than discussing it in isolation, it is analyzed together with the other red-black tree operations below.
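Before moving on to the trees, here is a rough experiment (reflection-based; the field name "table" comes from the member-variable listing earlier, and on JDK 9+ it may need --add-opens java.base/java.util=ALL-UNNAMED) that watches the bucket array double each time the threshold is crossed:
import java.lang.reflect.Field;
import java.util.HashMap;

public class ResizeWatch {
    // Peek at the current capacity of the internal bucket array
    static int capacity(HashMap<?, ?> map) throws Exception {
        Field f = HashMap.class.getDeclaredField("table");
        f.setAccessible(true);
        Object[] table = (Object[]) f.get(map);
        return table == null ? 0 : table.length;
    }

    public static void main(String[] args) throws Exception {
        HashMap<Integer, Integer> map = new HashMap<>();
        int lastCap = capacity(map);
        for (int i = 0; i < 100; i++) {
            map.put(i, i);
            int cap = capacity(map);
            if (cap != lastCap) {
                System.out.println("size=" + map.size() + ": capacity " + lastCap + " -> " + cap);
                lastCap = cap;
            }
        }
        // Expected: 0 -> 16 on the first put, 16 -> 32 at size 13,
        // 32 -> 64 at size 25, 64 -> 128 at size 49 (threshold = capacity * 0.75)
    }
}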
Node treeification and red-black tree splitting
The introduction of red-black trees is the biggest change to HashMap in JDK 1.8. Before 1.8, HashMap's data structure was an array plus linked lists; with enough data, a bucket's list could grow very long and traversal became inefficient. Since 1.8, HashMap deals with overlong lists: when a chain's length exceeds 8, it is automatically converted to a red-black tree, which effectively improves HashMap's performance.
But the introduction of red-black trees also raised the code's complexity considerably by adding a whole set of tree-manipulation methods. This article analyzes only those methods, not red-black trees themselves; readers who want to learn more about red-black trees can read my earlier articles Data Structures: An Analysis of the Structure and Methods of Red-Black Trees (Part 1) and Data Structures: An Analysis of the Structure and Methods of Red-Black Trees (Part 2).
Node treeification
Tree nodes in HashMap are represented by the TreeNode class:
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
TreeNode<K,V> parent; // red-black tree links
TreeNode<K,V> left;
TreeNode<K,V> right;
TreeNode<K,V> prev; // needed to unlink next upon deletion
boolean red;
TreeNode(int hash, K key, V val, Node<K,V> next) {
super(hash, key, val, next);
    }
    // ... tree operation methods omitted ...
}
As you can see, this is a red-black tree node: it holds references to its parent, its left and right children, and the previous node, plus a color flag. Now that we know the node's structure, let's look at some of the red-black tree operations.
Let's first analyze the treeification code:
// Convert the bucket's ordinary linked list into a list of tree nodes
final void treeifyBin(Node<K,V>[] tab, int hash) {
int n, index; Node<K,V> e;
// If the bucket array's capacity is below MIN_TREEIFY_CAPACITY, expand instead of treeifying
if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
resize();
else if ((e = tab[index = (n - 1) & hash]) != null) {
TreeNode<K,V> hd = null, tl = null;
do {
// Convert the node into a tree node
TreeNode<K,V> p = replacementTreeNode(e, null);
if (tl == null)
// Remember the converted head node in hd
hd = p;
else {
p.prev = tl;
tl.next = p;
}
tl = p;
} while ((e = e.next) != null);
if ((tab[index] = hd) != null)
// The tree-node list is not empty: convert it to a red-black tree
hd.treeify(tab);
}
}
TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
return new TreeNode<>(p.hash, p.key, p.value, next);
}
The code above isn't too complicated: broadly, it decides from the table's size whether to expand or to treeify, and then runs the corresponding code. It's worth noting that the treeification condition requires the table length to be greater than or equal to MIN_TREEIFY_CAPACITY. As mentioned earlier, this is a HashMap constant with value 64. Why must this condition hold before treeifying?
Here's an excerpt (lightly adapted) from https://segmentfault.com/a/1190000012926722#articleHeader6:
When the bucket array's capacity is small, the hash collision rate of key-value nodes can be high, which leads to long lists. In that situation, expansion should take priority over treeification: after all, the high collision rate is caused by the small bucket array, and that is the root problem. While the capacity is small, expanding first avoids a series of unnecessary treeifications. At the same time, a small capacity means expansions come more often, and each expansion has to split and remap any red-black trees. So while the bucket array is still small, turning long lists into red-black trees is a thankless task.
So HashMap's treeification process tries to preserve the container's performance as much as possible. Looking back at the code above: before the list is treeified, every node is first converted into a tree node, and only then is treeify() called to build the red-black tree. And since TreeNode ultimately inherits from the Node class, a TreeNode still carries the next reference, so the node order of the original list is preserved through those next references. The effect can be observed with the rough experiment below.
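Here is that rough experiment (reflection-based, written against JDK 8; the field and class names come from the source excerpts in this article, and newer JDKs may need --add-opens java.base/java.util=ALL-UNNAMED): insert many keys with identical hash codes and check whether the bucket's head node has become a TreeNode:
import java.lang.reflect.Field;
import java.util.HashMap;

public class TreeifyDemo {
    // Every instance collides: constant hashCode, distinct equals.
    // Comparable gives treeify() a tie-breaker for equal hashes.
    static class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public int compareTo(BadKey o) { return Integer.compare(id, o.id); }
    }

    public static void main(String[] args) throws Exception {
        HashMap<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 20; i++) map.put(new BadKey(i), i);

        Field f = HashMap.class.getDeclaredField("table");
        f.setAccessible(true);
        Object[] table = (Object[]) f.get(map);
        for (Object bin : table) {
            if (bin != null) {
                // Prints java.util.HashMap$TreeNode once the chain passed
                // TREEIFY_THRESHOLD and the table grew to MIN_TREEIFY_CAPACITY
                System.out.println(bin.getClass().getName());
            }
        }
    }
}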
Now let's look at the conversion into a red-black tree:
final void treeify(Node<K,V>[] tab) {
TreeNode<K,V> root = null;
for (TreeNode<K,V> x = this, next; x != null; x = next) {
next = (TreeNode<K,V>)x.next;
x.left = x.right = null;
if (root == null) { // First iteration: make this node the root, and color it black
x.parent = null;
x.red = false;
root = x;
}
else { // Subsequent iterations: find the insertion position for the current node x
K k = x.key;
int h = x.hash;
Class<?> kc = null;
// Starting from the root, compare nodes along the path with the current node x to find its position
for (TreeNode<K,V> p = root;;) {
int dir, ph;
K pk = p.key;
if ((ph = p.hash) > h) // The compared node's hash is greater than x's: dir = -1
dir = -1;
else if (ph < h) // The compared node's hash is less than x's: dir = 1
dir = 1;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0)
// Hashes are equal and the keys are not comparable: break the tie with tieBreakOrder()
dir = tieBreakOrder(k, pk);
TreeNode<K,V> xp = p;
// Make the compared node xp the parent of x:
// if dir <= 0 (xp's hash is not smaller), x becomes the left child, otherwise the right child
if ((p = (dir <= 0) ? p.left : p.right) == null) {
x.parent = xp;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
root = balanceInsertion(root, x);
break;
}
}
}
}
moveRootToFront(tab, root);
}
As you can see, the overall logic is to compare each node along the path with the current node and thereby determine the node's position in the tree; the implementation details, such as rebalancing via balanceInsertion(), are complex and are not expanded here.
Red-black tree splitting
Having covered node treeification, let's learn how a red-black tree is split. After a HashMap expansion, ordinary nodes need to be remapped, and red-black tree nodes are no exception.
When converting an ordinary list into a red-black tree, HashMap preserves the original list's node order through two extra references, next and prev. Thanks to this, remapping a red-black tree can proceed exactly like remapping a linked list, which avoids having to convert the tree back into a list before remapping and improves efficiency.
Let's take a look at the source of the split method:
//map: the HashMap itself
//tab: the table holding the bucket head nodes
//index: the index of the bucket being split
//bit: the old capacity, i.e. the single hash bit that decides where each node goes
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
TreeNode<K,V> b = this;
// Relink into lo and hi lists, preserving order
// The tree is split into two linked lists, referred to below as the lo tree and the hi tree
TreeNode<K,V> loHead = null, loTail = null;
TreeNode<K,V> hiHead = null, hiTail = null;
int lc = 0, hc = 0;
for (TreeNode<K,V> e = b, next; e != null; e = next) {
next = (TreeNode<K,V>)e.next;
e.next = null;
// Test the hash bit corresponding to the old capacity to decide which bucket the node belongs in
if ((e.hash & bit) == 0) {
// Append the node to the tail of the lo tree
if ((e.prev = loTail) == null)
loHead = e;
else
loTail.next = e;
loTail = e;
++lc;
}
// Otherwise append the current node to the hi tree
else {
if ((e.prev = hiTail) == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
++hc;
}
}
if (loHead != null) {
// If loHead is not empty and the list length is no more than 6, turn the red-black tree back into a linked list
if (lc <= UNTREEIFY_THRESHOLD)
tab[index] = loHead.untreeify(map);
else {
tab[index] = loHead;
/*
* when hiHead != null, some nodes moved elsewhere during the expansion,
* so the remaining lo list needs to be re-treeified
*/
if (hiHead != null) // (else is already treeified)
loHead.treeify(tab);
}
}
// The hi list gets the same treatment as above
if (hiHead != null) {
if (hc <= UNTREEIFY_THRESHOLD)
tab[index + bit] = hiHead.untreeify(map);
else {
tab[index + bit] = hiHead;
if (loHead != null)
hiHead.treeify(tab);
}
}
}
The logic of the source is as follows: after the split, the red-black tree becomes two lists made of TreeNodes (the hi tree and the lo tree). If a list's length does not exceed UNTREEIFY_THRESHOLD (6), the list is converted into an ordinary linked list; otherwise the TreeNode list is re-treeified. The two figures below show the state before and after a split.
Before the split:
After the split:
With that, the introduction to the red-black tree conversion operations comes to an end. Beyond these, HashMap provides many more methods for operating on red-black trees; the principles are much the same, and readers can study them on their own.
Summary
The HashMap source analysis comes to a close here. Finally, let's summarize some of HashMap's characteristics:
1. HashMap allows null keys and values;
2. HashMap's source contains no synchronization, so operating on it from multiple threads may cause thread-safety problems; it is recommended to wrap it with Collections.synchronizedMap to obtain a thread-safe Map (a short sketch follows this list), for example:
Map<String, String> map = Collections.synchronizedMap(new HashMap<String, String>());
3. Before JDK 1.8, when a bucket in a HashMap held a linked list, traversing it took O(n) time; since 1.8, a bucket with too many elements is converted into a red-black tree, and traversal takes O(log n).
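Here is that short sketch of points 1 and 2 (the class name is mine):
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SummaryDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "value-for-null-key"); // a single null key is allowed
        map.put("k", null);                  // null values are allowed too
        System.out.println(map.get(null));   // value-for-null-key

        // Wrap for thread safety: every call synchronizes on the wrapper object
        Map<String, String> safe = Collections.synchronizedMap(map);
        safe.put("k2", "v2");
    }
}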
Closing thoughts
Finally, let me share what I took away from this. Honestly, before writing this article my understanding of HashMap stopped at the level of simply using it; I had never dug into the source, until on a whim I decided to study Java's collection classes and opened the HashMap source. After reading it I was deeply impressed. To be honest, I never expected the source of such an everyday utility class to be this intricate: a single HashMap draws on so much technical knowledge, such as red-black trees, list-to-tree conversion, and hash computation, all woven together in compact code while still guaranteeing HashMap's high performance. I sincerely admire its designers; I doubt I could ever write code this elegant in my lifetime. And I finally understand why so many companies like to ask about the underlying implementation of collection classes in interviews: the knowledge involved runs deep, and anyone who truly understands the collections source must be pretty formidable, right?
Finally, thanks to the authors of these technical articles:
https://blog.csdn.net/u011240877/article/details/53351188
https://segmentfault.com/a/1190000012926722
http://www.importnew.com/29724.html