Contents
- Preface
- I. A brief introduction to HashMap
- II. HashMap source code walkthrough
  - 1. Class definition
  - 2. Constant definitions
  - 3. Inner class Node
  - 4. Static utility methods
  - 5. Fields
  - 6. Constructors
  - 7. A few everyday methods (not exhaustive)
  - 8. The query method get()
  - 9. The insert/update method put()
  - 10. Bucket treeification: treeifyBin()
  - 11. Resizing: resize()
  - 12. Node removal: remove()
- Summary
Preface
HashMap is one of the most commonly used collection-framework classes in day-to-day development, and understanding its internal structure and how it works is practically mandatory for every Java programmer. So let's walk through the source and find out just what kind of class HashMap is.
I. A brief introduction to HashMap
HashMap is a collection-framework class in the java.util package. It implements java.util.Map and offers convenient, efficient key-value access with an average query time complexity of O(1). It is not thread-safe.
HashMap stores key-value pairs in a structure combining a hash table, linked lists, and red-black trees. Before JDK 1.8 there were no red-black trees; JDK 1.8 added them as an optimization: when many hash collisions pile up, searching a long linked list is slow, so the red-black tree was introduced to speed up lookups in that case, with buckets converted between list form and tree form under threshold control. JDK 1.8 brought a second optimization as well, to the hash perturbation function: before JDK 1.8, hash() perturbed the key's hash four times to reduce the chance of collisions, while since JDK 1.8 a single perturbation suffices, simplifying the implementation.
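Before diving into the source, here is a minimal usage sketch of my own (the class name HashMapQuickStart is hypothetical) showing the key-value access just described:
import java.util.HashMap;
import java.util.Map;

public class HashMapQuickStart {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("apple", 1);                  // insert
        map.put("apple", 2);                  // update; put returns the old value 1
        System.out.println(map.get("apple")); // 2
        System.out.println(map.get("pear"));  // null: the key is absent
        map.remove("apple");
        System.out.println(map.size());       // 0
    }
}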
Enough preamble, let's dive straight into the source-code analysis.
II. HashMap source code walkthrough
A reminder: reading all of this takes some patience ~~
1. Class definition
HashMap extends AbstractMap and implements the Map interface, plus the two marker interfaces Cloneable and Serializable, which grant it the ability to be cloned and serialized.
// the HashMap class extends AbstractMap and implements the Map interface,
// plus two marker interfaces that make it cloneable and serializable
public class HashMap<K,V> extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable {
}
2. Constant definitions
// serialization ID, the unique identifier used for serialization and deserialization
private static final long serialVersionUID = 362498820763181265L;
// default initial capacity: 16
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
// maximum capacity: 2^30
static final int MAXIMUM_CAPACITY = 1 << 30;
// default load factor, used to decide when to resize
static final float DEFAULT_LOAD_FACTOR = 0.75f;
// treeification threshold for a bucket:
// when a bucket holds more than this many elements, its linked-list nodes
// are replaced with red-black-tree nodes
static final int TREEIFY_THRESHOLD = 8;
// untreeify threshold:
// during a resize, a tree-shaped bucket whose element count drops below this
// value is restored (split) back into a linked-list structure
static final int UNTREEIFY_THRESHOLD = 6;
// minimum table capacity for treeification:
// buckets may be treeified only once the hash table is at least this large;
// below that, an overfull bucket triggers a resize instead of treeification.
// To avoid conflicts between the resize and treeify choices,
// this value must be at least 4 * TREEIFY_THRESHOLD
static final int MIN_TREEIFY_CAPACITY = 64;
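As a quick sanity check on how these constants interact, here is a tiny sketch of my own (ThresholdDemo is a hypothetical name) reproducing the threshold arithmetic: with the default capacity 16 and load factor 0.75, the resize threshold is 12, so the 13th insertion triggers an expansion to 32.
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;        // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75f; // DEFAULT_LOAD_FACTOR
        // a resize fires once size exceeds capacity * loadFactor
        System.out.println((int) (capacity * loadFactor)); // 12
    }
}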
3. Inner class Node
HashMap contains many inner classes, such as Node, TreeNode, KeySet, Values, EntrySet, and HashIterator. Since this article only covers the basic insert/delete/update/query operations, Node and TreeNode are the main players; but the red-black tree is complex and not the focus here, so for now we will only read Node.
Node is the node class stored in the linked lists. It holds a node's hash, key, and value, and of course a reference to the next node.
// the inner class that actually stores entries; it can live in a linked list
// or, via its subclass TreeNode, in a red-black tree
static class Node<K,V> implements Map.Entry<K,V> {
final int hash; // this node's hash value
final K key; // key
V value; // value
Node<K,V> next; // the next node, either a tree node or a list node
// constructor
Node(int hash, K key, V value, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
}
// getters for key and value, plus an overridden toString
public final K getKey() {
return key; }
public final V getValue() {
return value; }
public final String toString() {
return key + "=" + value; }
// overridden hashCode method
public final int hashCode() {
// returns the XOR of the key's hash and the value's hash
return Objects.hashCode(key) ^ Objects.hashCode(value);
}
// sets the node's value and returns the old value
public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
}
// overridden equals method
public final boolean equals(Object o) {
// the same reference as this node: return true directly
if (o == this)
return true;
// otherwise check whether o is a Map.Entry,
// then compare key and value separately;
// return true only if both are equal, false otherwise
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>)o;
if (Objects.equals(key, e.getKey()) &&
Objects.equals(value, e.getValue()))
return true;
}
return false;
}
}
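Node itself is package-private, so it cannot be used directly from user code, but AbstractMap.SimpleEntry follows the same Map.Entry contract. A small sketch of my own (EntryContractDemo is a hypothetical name) showing the behavior that overrides like the ones above produce:
import java.util.AbstractMap;
import java.util.Map;

public class EntryContractDemo {
    public static void main(String[] args) {
        Map.Entry<String, Integer> a = new AbstractMap.SimpleEntry<>("k", 1);
        Map.Entry<String, Integer> b = new AbstractMap.SimpleEntry<>("k", 1);
        // equal key and value => equal entries, and equal hash codes
        // (hashCode is the key's hash XOR the value's hash, as in Node above)
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true
    }
}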
4. Static utility methods
HashMap provides four static utility methods: hash, comparableClassFor, compareComparables, and tableSizeFor.
- hash: the hash perturbation function, which computes the key's hash value with one round of perturbation in order to reduce the probability of hash collisions.
// the perturbation function: computes the key's hash value
// compared with the four rounds of shifting in JDK 7 this has been optimized:
// a single shift round achieves the same goal
static final int hash(Object key) {
int h;
// if the key is not null, XOR its hashCode with the same value shifted
// right (unsigned) by 16 bits, mixing the high bits into the low bits;
// the point of the perturbation is to reduce the probability of hash collisions
// for a deeper discussion, see: https://www.zhihu.com/question/20733617/answer/111577937
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
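To see the perturbation at work, here is a small sketch of my own (HashDemo is a hypothetical name; its hash method is a copy of the one above) printing the raw hash, the perturbed hash, and the resulting bucket index for the default capacity of 16:
public class HashDemo {
    // a copy of the perturbation above, for experimentation
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
    public static void main(String[] args) {
        String key = "example";
        int h = hash(key);
        System.out.println("raw:       " + Integer.toBinaryString(key.hashCode()));
        System.out.println("perturbed: " + Integer.toBinaryString(h));
        System.out.println("bucket:    " + ((16 - 1) & h)); // only the low 4 bits matter
    }
}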
- comparableClassFor: checks whether an object is comparable; HashMap uses it mostly to inspect keys. Note the special case for String: String implements the Comparable interface and overrides Object's hashCode() and equals(), which is one reason String is usually recommended as the key type.
// checks whether an object is comparable; used mostly on keys in HashMap
static Class<?> comparableClassFor(Object x) {
// if x is not even of type Comparable, it is not comparable at all
if (x instanceof Comparable) {
Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
// special case for String: if x is a String, return its Class directly
// (one reason most people recommend String as the HashMap key type)
if ((c = x.getClass()) == String.class) // bypass checks
return c;
// get the interfaces this class implements, with their generic parameter info;
// it can only be Comparable if it actually implements some interface
if ((ts = c.getGenericInterfaces()) != null) {
// Loop through the interfaces it implements
for (int i = 0; i < ts.length; ++i) {
// is this interface parameterized, i.e. does it carry generic info?
if (((t = ts[i]) instanceof ParameterizedType) &&
// is the raw type behind the generic info the Comparable interface?
((p = (ParameterizedType)t).getRawType() ==
Comparable.class) &&
// get its actual type arguments: there must be exactly one,
// and it must be c, the type of the object x that was passed in
(as = p.getActualTypeArguments()) != null &&
as.length == 1 && as[0] == c) // type arg is c
return c;
}
}
}
return null;
}
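comparableClassFor is package-private, so experimenting with it requires a standalone copy. The sketch below is my own (ComparableClassForDemo is a hypothetical name, and its method is a slightly condensed copy of the logic above) showing the three interesting cases:
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class ComparableClassForDemo {
    // a standalone, slightly condensed copy of the logic above
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c = x.getClass();
            if (c == String.class)
                return c;
            for (Type t : c.getGenericInterfaces()) {
                if (t instanceof ParameterizedType) {
                    ParameterizedType p = (ParameterizedType) t;
                    Type[] as = p.getActualTypeArguments();
                    if (p.getRawType() == Comparable.class && as.length == 1 && as[0] == c)
                        return c;
                }
            }
        }
        return null;
    }
    public static void main(String[] args) {
        System.out.println(comparableClassFor("s"));          // class java.lang.String
        System.out.println(comparableClassFor(42));           // class java.lang.Integer
        System.out.println(comparableClassFor(new Object())); // null: not Comparable
    }
}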
- compareComparables: compares k and x; returns 0 if x is not of the same type as k (the type kc), otherwise returns the result of compareTo.
// compares k and x: returns 0 if x is null or not of type kc,
// otherwise returns the result of k.compareTo(x)
@SuppressWarnings({
"rawtypes","unchecked"}) // for cast to Comparable
static int compareComparables(Class<?> kc, Object k, Object x) {
return (x == null || x.getClass() != kc ? 0 :
((Comparable)k).compareTo(x));
}
- tableSizeFor: given a desired capacity cap, computes the actual hash-bucket capacity as a power of two.
// given a desired capacity cap, returns the actual table (hash bucket) length as a power of two;
// the return value is normally >= cap
static final int tableSizeFor(int cap) {
// after the OR and shift operations below, every low-order bit of n is 1
int n = cap - 1;
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
// bounds-check n, then return a power of two to use as the table (hash bucket) capacity
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
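A quick way to convince yourself of what tableSizeFor returns is to copy it into a scratch class and feed it a few values (TableSizeForDemo is my own hypothetical scratch class):
public class TableSizeForDemo {
    // a copy of the method above, for experimentation
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16;
        return (n < 0) ? 1 : (n >= (1 << 30)) ? (1 << 30) : n + 1;
    }
    public static void main(String[] args) {
        System.out.println(tableSizeFor(7));  // 8
        System.out.println(tableSizeFor(16)); // 16: already a power of two
        System.out.println(tableSizeFor(17)); // 32
    }
}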
5. Fields
HashMap defines six fields, used to build and manage the hash table.
// the hash table itself: an array of Node, where each array element is a bucket;
// initialized on first use, and migrated to a new array on resize
transient Node<K,V>[] table;
// cached entry set, recording the stored key-value pairs as a Set
transient Set<Map.Entry<K,V>> entrySet;
/**
* The number of key-value mappings contained in this map.
*/
transient int size;
// count of structural modifications (a rehash is recorded as well);
// HashMap is not thread-safe, so modCount is kept for the fail-fast strategy
transient int modCount;
// the next size value at which to resize; equal to capacity * load factor
int threshold;
// the load factor of the hash table, used to compute the element-count threshold:
// threshold = table.length * loadFactor
final float loadFactor;
6. Constructors
// constructor taking an initial capacity and a load factor
public HashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor);
this.loadFactor = loadFactor;
// compute the actual capacity from the desired capacity:
// the actual capacity is always the smallest power of two >= the desired capacity
// e.g. for a desired initial capacity of 7, the actual capacity is 8, which is 2^3
this.threshold = tableSizeFor(initialCapacity);
}
public HashMap(int initialCapacity) {
this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
// no-args constructor: default initial capacity 16, load factor 0.75
public HashMap() {
this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}
// constructs a new HashMap and adds all elements of the given Map m to the table
public HashMap(Map<? extends K, ? extends V> m) {
this.loadFactor = DEFAULT_LOAD_FACTOR;
putMapEntries(m, false);
}
The bulk-insert method putMapEntries looks like this:
// adds all elements of another Map to this table;
// the evict parameter is false when called during initialization
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
// get the number of elements in m
int s = m.size();
// only proceed if it is greater than 0
if (s > 0) {
// if the hash table has not been initialized yet
if (table == null) {
// pre-size:
// from m's size and the load factor, compute the capacity needed to hold m
float ft = ((float)s / loadFactor) + 1.0F;
// clamp it so it does not exceed MAXIMUM_CAPACITY
int t = ((ft < (float)MAXIMUM_CAPACITY) ?
(int)ft : MAXIMUM_CAPACITY);
// if the computed value exceeds the current threshold, store the smallest
// power of two >= t in threshold (it holds the capacity until the table is created)
if (t > threshold)
threshold = tableSizeFor(t);
}
// if the table is already initialized and m holds more elements
// than the current threshold, resize first
else if (s > threshold)
resize();
// traverse m and add its elements to this table one by one
for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
K key = e.getKey();
V value = e.getValue();
putVal(hash(key), key, value, false, evict);
}
}
}
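To check the pre-sizing arithmetic above, consider copying a 13-element map: ft = 13 / 0.75 + 1 ≈ 18.3, so t = 18 and the pre-sized threshold becomes tableSizeFor(18) = 32. A small sketch of my own (CopyConstructorDemo is a hypothetical name):
import java.util.HashMap;
import java.util.Map;

public class CopyConstructorDemo {
    public static void main(String[] args) {
        Map<String, Integer> src = new HashMap<>();
        for (int i = 0; i < 13; i++)
            src.put("k" + i, i);
        // the copy constructor pre-sizes via putMapEntries, avoiding the
        // extra resize that 13 individual puts into a default map would trigger
        Map<String, Integer> copy = new HashMap<>(src);
        System.out.println(copy.size()); // 13
    }
}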
7. A few everyday methods (not exhaustive)
// frequently used: returns the number of key-value pairs in the table
// note that size is not the table's capacity but the actual number of key-value pairs
public int size() {
return size;
}
// whether the map is empty, i.e. holds no key-value pairs (unrelated to the table's capacity)
public boolean isEmpty() {
return size == 0;
}
// checks whether a key exists;
// the logic is similar to get, mainly calling getNode
public boolean containsKey(Object key) {
return getNode(hash(key), key) != null;
}
// checks whether a value exists
public boolean containsValue(Object value) {
Node<K,V>[] tab; V v;
// walk the linked list in every bucket
if ((tab = table) != null && size > 0) {
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next) {
// return true as soon as a matching value is found
if ((v = e.value) == value ||
(value != null && value.equals(v)))
return true;
}
}
}
return false;
}
// bulk-stores a Map; same logic as in the constructor, mainly calling putMapEntries
public void putAll(Map<? extends K, ? extends V> m) {
putMapEntries(m, true);
}
8. The query method get()
Looks a value up by key: returns the value if found, or null if not.
// gets the value for a key
public V get(Object key) {
Node<K,V> e;
// perturb the key's hash, look up the Node with it, then return the node's value
return (e = getNode(hash(key), key)) == null ? null : e.value;
}
// new in JDK 8: returns the value if found, otherwise the supplied default
public V getOrDefault(Object key, V defaultValue) {
Node<K,V> e;
return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
}
// fetches the node matching the perturbed hash value and the key
final Node<K,V> getNode(int hash, Object key) {
Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
// basic logic: find the matching node and return it; return null if none exists
// proceed only if table is not null and its length is greater than 0
if ((tab = table) != null && (n = tab.length) > 0 &&
// AND the hash with n-1 to locate the bucket, then grab its head node first
(first = tab[(n - 1) & hash]) != null) {
// if the head node happens to be the target node, return it directly;
// checked: does the head's hash match, and does the key match (by reference or by equals)?
if (first.hash == hash && // always check first node
((k = first.key) == key || (key != null && key.equals(k))))
return first;
// the head node is not the target; move on to its successor
if ((e = first.next) != null) {
// if the bucket holds a red-black tree, search it with getTreeNode
if (first instanceof TreeNode)
return ((TreeNode<K,V>)first).getTreeNode(hash, key);
// otherwise it is a linked list: loop through it until the node is found
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
} while ((e = e.next) != null);
}
}
return null;
}
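The difference between get and getOrDefault in practice, as a minimal sketch of my own (GetDemo is a hypothetical name):
import java.util.HashMap;
import java.util.Map;

public class GetDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        System.out.println(map.get("a"));              // 1
        System.out.println(map.get("b"));              // null: not found
        System.out.println(map.getOrDefault("b", -1)); // -1: the supplied default
    }
}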
9. The insert/update method put()
Inserts or updates a value in the table. The logic is as follows (a short usage demo follows the source below):
- Check whether the hash table has been initialized; if not, call resize to create it
- Locate the bucket using the key's perturbed hash; if the bucket is empty, create a new Node and place it in the bucket
- If the bucket is not empty, a hash collision has occurred; handle it as follows:
- If the bucket holds a red-black tree, insert in tree fashion: if the key does not exist, insert directly; if it does, grab the existing node first
- If the bucket holds a linked list, loop through it: if the tail is reached, the key does not exist, so append a new node and check whether the treeification threshold has been crossed; if the key already exists, grab the node first
- If overwriting is allowed, overwrite the value of the node found for the key; otherwise do nothing
- Update the modification count, the number of key-value pairs, and so on, and resize if the threshold is exceeded
// inserts a new value; mainly calls putVal, see putVal() for the detailed logic
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true);
}
// the core insertion function
// if the onlyIfAbsent parameter is true, an existing key's value is not overwritten
// if evict is false, the method is being called during initialization
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
// first check whether table is null or has zero length, i.e. is not initialized;
// if it is not initialized, resize to create it
if ((tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length;
// locate the bucket first; p is the bucket's head node
// if p is null the bucket holds no node: store the new key-value pair right in the bucket
if ((p = tab[i = (n - 1) & hash]) == null)
tab[i] = newNode(hash, key, value, null);
// the bucket's head node p is not null, so a hash collision occurred; handle it below
else {
Node<K,V> e; K k;
// compare the head node's perturbed hash value and key with the new ones;
// if they match, the inserted key already exists and is the head node:
// grab that node first, and decide later whether to overwrite its value
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
e = p;
// the head node's key differs from the inserted key;
// next, if the bucket holds a red-black tree, insert into the tree in tree fashion
else if (p instanceof TreeNode)
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
// the bucket is not a red-black tree, so it is a linked list
else {
// loop through the list until a node with the same key as the inserted one is found;
// if none is found, append at the tail
for (int binCount = 0; ; ++binCount) {
// reached the tail node: the inserted key does not exist, append it at the tail
if ((e = p.next) == null) {
p.next = newNode(hash, key, value, null);
// if the bucket's node count reaches the treeification threshold, treeify it
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash);
break;
}
// the inserted key already exists: grab the node, and decide later whether to overwrite its value
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e;
}
}
// if an existing node was found, handle it here
if (e != null) {
// existing mapping for key
V oldValue = e.value;
// if onlyIfAbsent is false, or the old value is null, simply replace the old value
if (!onlyIfAbsent || oldValue == null)
e.value = value;
// an empty hook, there for LinkedHashMap to override
afterNodeAccess(e);
return oldValue;
}
}
// reaching this point means a brand-new node was inserted (updates returned above);
// bump modCount to record the structural change
++modCount;
// update size, and resize if it exceeds the threshold
if (++size > threshold)
resize();
// another empty hook for LinkedHashMap to override
afterNodeInsertion(evict);
return null;
}
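The onlyIfAbsent parameter is what separates put from putIfAbsent (which HashMap implements by calling putVal with onlyIfAbsent = true). A small sketch of my own (PutDemo is a hypothetical name) showing the return-value semantics:
import java.util.HashMap;

public class PutDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        System.out.println(map.put("k", 1));         // null: no previous mapping
        System.out.println(map.put("k", 2));         // 1: the old value, now overwritten
        System.out.println(map.putIfAbsent("k", 3)); // 2: the existing value is kept
        System.out.println(map.get("k"));            // 2
    }
}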
10. Bucket treeification: treeifyBin()
When the number of elements in a bucket reaches TREEIFY_THRESHOLD (default 8), the linked list is replaced with a red-black tree to improve query efficiency.
// replaces all linked-list nodes in a bucket with red-black-tree nodes
final void treeifyBin(Node<K,V>[] tab, int hash) {
int n, index; Node<K,V> e;
// if the table is empty, or its length is below the minimum treeification
// capacity (default 64), resize (create or grow the table) instead of treeifying
if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
resize();
else if ((e = tab[index = (n - 1) & hash]) != null) {
// the bucket located by the hash is not empty, so treeify it;
// e walks the bucket's linked list, starting from the head node
TreeNode<K,V> hd = null, tl = null; // head and tail of the tree node chain
do {
// create a new tree node whose content matches the current list node e
TreeNode<K,V> p = replacementTreeNode(e, null);
if (tl == null) // fix the head node of the chain
hd = p;
else {
p.prev = tl;
tl.next = p;
}
tl = p;
} while ((e = e.next) != null);
// point the bucket at the new head node; from now on this bucket holds a red-black tree instead of a linked list
if ((tab[index] = hd) != null)
hd.treeify(tab);
}
}
11. Resizing: resize()
resize is a very important function: it carries HashMap's core dynamic-resizing logic. Its main steps are as follows (a small index demo follows the source below):
- Back up the old table, its capacity, and its threshold; declare the new table's capacity and threshold
- If the old capacity is greater than 0:
- If the old capacity has already reached the upper limit, set the threshold to the maximum integer and stop resizing
- If the old capacity has not reached the limit, set the new capacity to twice the old capacity, provided the new capacity also stays within the limit
- If the old capacity is 0 but the threshold is greater than 0, a capacity and threshold were specified at construction: the old threshold becomes the new capacity
- If the old capacity is 0 and the threshold is 0, nothing was specified at construction: use the default initial capacity and threshold for the new table
- If the new threshold is still 0 after the steps above, compute it from the new capacity and the load factor
- Create the new table, with an array length equal to the new capacity
- If the old table is not empty, migrate the data, walking through the buckets one by one:
- If a bucket holds a single node, place it directly into the corresponding bucket of the new table
- If a bucket holds more than one node and the structure is a red-black tree, split the tree and migrate it
- If a bucket holds more than one node and the structure is a linked list, split it into a low list and a high list (high index = low index + old capacity); the low list stays at the old bucket index in the new table, the high list moves to the shifted bucket index
- Return the new table
// the core resize function of the hash table
final Node<K,V>[] resize() {
// save the old table first
Node<K,V>[] oldTab = table;
// the old table's capacity
int oldCap = (oldTab == null) ? 0 : oldTab.length;
// the old table's threshold
int oldThr = threshold;
// declare the new table's capacity and threshold
int newCap, newThr = 0;
// if the old table's capacity is greater than 0
if (oldCap > 0) {
// the old capacity is already at the limit: set the threshold to the maximum integer and stop growing
if (oldCap >= MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return oldTab;
} // not at the limit: the new table's capacity is twice the old one (if that stays within the limit)
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // double threshold
}// old capacity is 0 but the threshold is greater than 0: a capacity and threshold were specified at construction
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;// take the old threshold as the new table's capacity
else {
// neither an initial capacity nor a threshold was specified: fall back to the defaults
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
if (newThr == 0) {
float ft = (float)newCap * loadFactor;// compute the new threshold from the new capacity and the load factor
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);// with a bounds check
}
// Update threshold
threshold = newThr;
// Create a new table
@SuppressWarnings({
"rawtypes","unchecked"})
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
// assign the new table to table; the old array is still reachable through oldTab
table = newTab;
// if the old table is not empty, migrate its nodes
if (oldTab != null) {
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
// take the head node of each old bucket in turn
if ((e = oldTab[j]) != null) {
oldTab[j] = null;// clear the old bucket's slot so the old chain can be garbage collected
if (e.next == null)// only one node in the bucket: place it straight into the new table
newTab[e.hash & (newCap - 1)] = e;
else if (e instanceof TreeNode)// more than one node, stored as a red-black tree: split the tree
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else {
// preserve order
// because the capacity doubles, each node of the original list
// may now be stored either at its original index, the low position,
// or at the shifted index, the high position:
// high index = low index + old capacity
// head and tail of the low list
Node<K,V> loHead = null, loTail = null;
// head and tail of the high list
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
// AND the hash with the old capacity: this isolates the single bit that
// decides the node's position after doubling;
// 0 means the node stays at the low index, otherwise it moves to the high index
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}// the high case mirrors the low case
else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
// The lower list is stored in the original bucket index
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
// The high-level list is stored in the new bucket index
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
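The low/high split can be reproduced with plain arithmetic. A sketch of my own (ResizeSplitDemo is a hypothetical name) for oldCap = 16:
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        for (int hash : new int[] {5, 21}) {
            int oldIndex = hash & (oldCap - 1);
            // (hash & oldCap) tests the one extra bit that matters after doubling
            int newIndex = (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
            System.out.println("hash=" + hash + " old=" + oldIndex + " new=" + newIndex);
        }
        // hash=5  -> old=5 new=5   (the bit is 0: stays at the low index)
        // hash=21 -> old=5 new=21  (the bit is 1: moves to 5 + 16)
    }
}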
12. Node removal: remove()
Removal first looks up the Node for the given key and then deletes it; if no node is found it simply returns null. The process is very similar to get().
// removes a node by key; the heavy lifting is done by removeNode
public V remove(Object key) {
Node<K,V> e;
return (e = removeNode(hash(key), key, null, false, true)) == null ?
null : e.value;
}
// the core node-removal method
// if the matchValue parameter is true, a node is removed only when both key and value match
// if the movable parameter is false, other nodes are not moved while removing this one
final Node<K,V> removeNode(int hash, Object key, Object value,
boolean matchValue, boolean movable) {
Node<K,V>[] tab; Node<K,V> p; int n, index;
// before deleting, make sure the table is not empty and its length is greater than 0,
// and that the bucket located by the key is not empty either
if ((tab = table) != null && (n = tab.length) > 0 &&
(p = tab[index = (n - 1) & hash]) != null) {
Node<K,V> node = null, e; K k; V v;
// if the head node is the one to delete, assign it to node directly
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
node = p;
// otherwise, if there are successor nodes, keep looking for the node to delete
else if ((e = p.next) != null) {
// if the bucket holds a red-black tree, look the node up in the tree
if (p instanceof TreeNode)
node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
else {
// if it is a linked list, loop through it to search;
// note that p ends up as the predecessor of the node to delete,
// and node is the node to delete itself
do {
if (e.hash == hash &&
((k = e.key) == key ||
(key != null && key.equals(k)))) {
node = e;
break;
}
p = e;
} while ((e = e.next) != null);
}
}
// if the node to delete was found, delete it; otherwise fall through and return null
// matchValue == true additionally requires both key and value to match
if (node != null && (!matchValue || (v = node.value) == value ||
(value != null && value.equals(v)))) {
// unlink the node according to the bucket's data structure
if (node instanceof TreeNode)
((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
else if (node == p)
tab[index] = node.next;
else
p.next = node.next;
++modCount;// record the structural change
--size;// one key-value pair fewer
afterNodeRemoval(node);// an empty hook; a callback spot for LinkedHashMap
return node;
}
}
return null;
}
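Both deletion paths are reachable from the public API: remove(key) maps to matchValue = false, while the two-argument remove(key, value) (a Map default method that HashMap overrides) maps to matchValue = true. A small sketch of my own (RemoveDemo is a hypothetical name):
import java.util.HashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("k", 1);
        System.out.println(map.remove("k", 2)); // false: value differs, node kept
        System.out.println(map.remove("k"));    // 1: removed, old value returned
        System.out.println(map.remove("k"));    // null: already gone
    }
}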
Summary
HashMap's design really is excellent: the combination of array + linked list + red-black tree, heavy use of bit operations for efficiency, hash perturbation to reduce collisions, and more. Its biggest weakness, however, is that it does not support multithreading: it is not thread-safe, so it cannot behave well in concurrent environments, and fatal situations can even arise (such as the infinite loop that a concurrent resize could produce in JDK 7). Hashtable is thread-safe, but its concurrent performance is mediocre, while ConcurrentHashMap makes up for that shortcoming and performs very well under concurrency. I will write a source-code walkthrough of JDK 8's ConcurrentHashMap in a future article, so we can learn together what it optimizes on top of HashMap.