/**
 * Hash table based implementation of the <tt>Map</tt> interface.  This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key.  (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.)  This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.  If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 * (Translator's note: that is, initial capacity * load factor > maximum
 * number of entries, where "maximum number of entries" means the largest
 * number of key-value mappings the map is expected to hold.)
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table.  To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.
 * (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
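The sizing rule in the javadoc above (no rehash ever occurs when the initial capacity exceeds the expected entry count divided by the load factor) can be sketched as a small helper. `PresizedMap` and `capacityFor` are illustrative names, not part of HashMap's API:

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
    // Smallest capacity that keeps expectedEntries at or below
    // capacity * loadFactor, so the table never rehashes while filling up.
    static int capacityFor(int expectedEntries, float loadFactor) {
        return (int) Math.ceil(expectedEntries / (double) loadFactor);
    }

    public static void main(String[] args) {
        int expected = 1000;
        // 1000 / 0.75 -> 1334; HashMap rounds the requested capacity up to
        // the next power of two internally.
        Map<String, Integer> m = new HashMap<>(capacityFor(expected, 0.75f));
        for (int i = 0; i < expected; i++) {
            m.put("key" + i, i);
        }
        System.out.println(m.size()); // 1000
    }
}
```

Without pre-sizing, filling a default-capacity (16) map with 1000 entries triggers several resize/rehash passes as the table doubles its way up.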
/*
 * Implementation notes.
 *
 * This map usually acts as a binned (bucketed) hash table, but
 * when bins get too large, they are transformed into bins of
 * TreeNodes, each structured similarly to those in
 * java.util.TreeMap.  Most methods try to use normal bins, but
 * relay to TreeNode methods when applicable (simply by checking
 * instanceof a node).  Bins of TreeNodes may be traversed and
 * used like any others, but additionally support faster lookup
 * when overpopulated.  However, since the vast majority of bins in
 * normal use are not overpopulated, checking for existence of
 * tree bins may be delayed in the course of table methods.
 *
 * Tree bins (i.e., bins whose elements are all TreeNodes) are
 * ordered primarily by hashCode, but in the case of ties, if two
 * elements are of the same "class C implements Comparable<C>"
 * type then their compareTo method is used for ordering.  (We
 * conservatively check generic types via reflection to validate
 * this -- see method comparableClassFor.)  The added complexity
 * of tree bins is worthwhile in providing worst-case O(log n)
 * operations when keys either have distinct hashes or are
 * orderable.  Thus, performance degrades gracefully under
 * accidental or malicious usages in which hashCode() methods
 * return values that are poorly distributed, as well as those in
 * which many keys share a hashCode, so long as they are also
 * Comparable.  (If neither of these apply, we may waste about a
 * factor of two in time and space compared to taking no
 * precautions.  But the only known cases stem from poor user
 * programming practices that are already so slow that this makes
 * little difference.)
 *
 * Because TreeNodes are about twice the size of regular nodes, we
 * use them only when bins contain enough nodes to warrant use
 * (see TREEIFY_THRESHOLD).  And when they become too small (due to
 * removal or resizing) they are converted back to plain bins.  In
 * usages with well-distributed user hashCodes, tree bins are
 * rarely used.  Ideally, under random hashCodes, the frequency of
 * nodes in bins follows a Poisson distribution
 * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
 * parameter of about 0.5 on average for the default resizing
 * threshold of 0.75, although with a large variance because of
 * resizing granularity.  Ignoring variance, the expected
 * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
 * factorial(k)).  The first values are:
 *
 * 0:    0.60653066
 * 1:    0.30326533
 * 2:    0.07581633
 * 3:    0.01263606
 * 4:    0.00157952
 * 5:    0.00015795
 * 6:    0.00001316
 * 7:    0.00000094
 * 8:    0.00000006
 * more: less than 1 in ten million
 *
 * The root of a tree bin is normally its first node.
 * However, sometimes (currently only upon Iterator.remove), the root might
 * be elsewhere, but can be recovered following parent links
 * (method TreeNode.root()).
 *
 * All applicable internal methods accept a hash code as an
 * argument (as normally supplied from a public method), allowing
 * them to call each other without recomputing user hashCodes.
 * Most internal methods also accept a "tab" argument, that is
 * normally the current table, but may be a new or old one when
 * resizing or converting.
 *
 * When bin lists are treeified, split, or untreeified, we keep
 * them in the same relative access/traversal order (i.e., field
 * Node.next) to better preserve locality, and to slightly
 * simplify handling of splits and traversals that invoke
 * iterator.remove.  When using comparators on insertion, to keep a
 * total ordering (or as close as is required here) across
 * rebalancings, we compare classes and identityHashCodes as
 * tie-breakers.
 *
 * The use and transitions among plain vs tree modes is
 * complicated by the existence of subclass LinkedHashMap.  See
 * below for hook methods defined to be invoked upon insertion,
 * removal and access that allow LinkedHashMap internals to
 * otherwise remain independent of these mechanics.  (This also
 * requires that a map instance be passed to some utility methods
 * that may create new nodes.)
 *
 * The concurrent-programming-like SSA-based coding style helps
 * avoid aliasing errors amid all of the twisty pointer operations.
 */
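The Poisson figures quoted in the implementation notes can be reproduced directly from the stated formula exp(-0.5) * pow(0.5, k) / factorial(k); a minimal check (class and method names here are for illustration only):

```java
public class PoissonBins {
    // Expected fraction of bins holding exactly k nodes under random
    // hashCodes, with the average load parameter 0.5 used in the notes:
    // exp(-0.5) * pow(0.5, k) / k!
    static double probability(int k) {
        double factorial = 1.0;
        for (int i = 2; i <= k; i++) factorial *= i;
        return Math.exp(-0.5) * Math.pow(0.5, k) / factorial;
    }

    public static void main(String[] args) {
        // Prints the same table as the comment: 0.60653066, 0.30326533, ...
        for (int k = 0; k <= 8; k++) {
            System.out.printf("%d: %.8f%n", k, probability(k));
        }
    }
}
```

The value for k = 8 is about 6e-8, which is why TREEIFY_THRESHOLD = 8 is essentially never reached with well-distributed hash codes.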
Static variables
DEFAULT_INITIAL_CAPACITY (default initial capacity)
/**
 * The default initial capacity - MUST be a power of two.
 * (1 shifted left by 4 bits = 2^4 = 16)
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
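The power-of-two requirement is what lets HashMap turn a hash into a bucket index with a bit mask instead of a modulo; a quick illustration of the equivalence (variable names are illustrative):

```java
public class IndexMask {
    public static void main(String[] args) {
        int capacity = 1 << 4;   // 16, the default initial capacity
        int hash = 123456789;
        // With a power-of-two capacity, (capacity - 1) is an all-ones bit
        // mask (0b1111 for 16), so this equals hash % capacity but is cheaper
        // and behaves correctly for any bit pattern:
        int index = hash & (capacity - 1);
        System.out.println(index);                      // 5
        System.out.println(Math.floorMod(hash, capacity)); // 5, same result
    }
}
```

If the capacity were not a power of two, the mask trick would silently skip buckets, which is why the constants below are all powers of two.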
MAXIMUM_CAPACITY (maximum capacity)
/**
 * The maximum capacity, used if a higher value is implicitly specified
 * by either of the constructors with arguments.
 * MUST be a power of two <= 1<<30.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;
/*
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */
/**
 * {@code Lock} implementations provide more extensive locking
 * operations than can be obtained using {@code synchronized} methods
 * and statements.  They allow more flexible structuring, may have
 * quite different properties, and may support multiple associated
 * {@link Condition} objects.
 *
 * <p>A lock is a tool for controlling access to a shared resource by
 * multiple threads.  Commonly, a lock provides exclusive access to a
 * shared resource: only one thread at a time can acquire the lock and
 * all access to the shared resource requires that the lock be
 * acquired first.  However, some locks may allow concurrent access to
 * a shared resource, such as the read lock of a {@link ReadWriteLock}.
 *
 * <p>The use of {@code synchronized} methods or statements provides
 * access to the implicit monitor lock associated with every object, but
 * forces all lock acquisition and release to occur in a block-structured way:
 * when multiple locks are acquired they must be released in the opposite
 * order, and all locks must be released in the same lexical scope in which
 * they were acquired.
 *
 * <p>While the scoping mechanism for {@code synchronized} methods
 * and statements makes it much easier to program with monitor locks,
 * and helps avoid many common programming errors involving locks,
 * there are occasions where you need to work with locks in a more
 * flexible way.  For example, some algorithms for traversing
 * concurrently accessed data structures require the use of
 * "hand-over-hand" or "chain locking": you
 * acquire the lock of node A, then node B, then release A and acquire
 * C, then release B and acquire D and so on.  Implementations of the
 * {@code Lock} interface enable the use of such techniques by
 * allowing a lock to be acquired and released in different scopes,
 * and allowing multiple locks to be acquired and released in any
 * order.
 *
 * <p>With this increased flexibility comes additional
 * responsibility.
 * The absence of block-structured locking removes the
 * automatic release of locks that occurs with {@code synchronized}
 * methods and statements.  In most cases, the following idiom
 * should be used:
 *
 * <pre> {@code
 * Lock l = ...;
 * l.lock();
 * try {
 *   // access the resource protected by this lock
 * } finally {
 *   l.unlock();
 * }}</pre>
 *
 * When locking and unlocking occur in different scopes, care must be
 * taken to ensure that all code that is executed while the lock is
 * held is protected by try-finally or try-catch to ensure that the
 * lock is released when necessary.
 *
 * <p>{@code Lock} implementations provide additional functionality
 * over the use of {@code synchronized} methods and statements by
 * providing a non-blocking attempt to acquire a lock ({@link
 * #tryLock()}), an attempt to acquire the lock that can be
 * interrupted ({@link #lockInterruptibly}), and an attempt to acquire
 * the lock that can timeout ({@link #tryLock(long, TimeUnit)}).
 *
 * <p>A {@code Lock} class can also provide behavior and semantics
 * that is quite different from that of the implicit monitor lock,
 * such as guaranteed ordering, non-reentrant usage, or deadlock
 * detection.  If an implementation provides such specialized semantics
 * then the implementation must document those semantics.
 *
 * <p>Note that {@code Lock} instances are just normal objects and can
 * themselves be used as the target in a {@code synchronized} statement.
 * Acquiring the
 * monitor lock of a {@code Lock} instance has no specified relationship
 * with invoking any of the {@link #lock} methods of that instance.
 * It is recommended that to avoid confusion you never use {@code Lock}
 * instances in this way, except within their own implementation.
 *
 * <p>Except where noted, passing a {@code null} value for any
 * parameter will result in a {@link NullPointerException} being
 * thrown.
 *
 * <h3>Memory Synchronization</h3>
 *
 * <p>All {@code Lock} implementations <em>must</em> enforce the same
 * memory synchronization semantics as provided by the built-in monitor
 * lock, as described in
 * <a href="https://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.4">
 * The Java Language Specification (17.4 Memory Model)</a>:
 * <ul>
 * <li>A successful {@code lock} operation has the same memory
 * synchronization effects as a successful <em>Lock</em> action.
 * <li>A successful {@code unlock} operation has the same
 * memory synchronization effects as a successful <em>Unlock</em> action.
 * </ul>
 *
 * Unsuccessful locking and unlocking operations, and reentrant
 * locking/unlocking operations, do not require any memory
 * synchronization effects.
 *
 * <h3>Implementation Considerations</h3>
 *
 * <p>The three forms of lock acquisition (interruptible,
 * non-interruptible, and timed) may differ in their performance
 * characteristics, ordering guarantees, or other implementation
 * qualities.  Further, the ability to interrupt the <em>ongoing</em>
 * acquisition of a lock may not be available in a given {@code Lock}
 * class.  Consequently, an implementation is not required to define
 * exactly the same guarantees or semantics for all three forms of
 * lock acquisition, nor is it required to support interruption of an
 * ongoing lock acquisition.  An implementation is required to clearly
 * document the semantics and guarantees provided by each of the
 * locking methods.  It must also obey the interruption semantics as
 * defined in this interface, to the extent that interruption of lock
 * acquisition is supported: which is either totally, or only on
 * method entry.
 *
 * <p>As interruption generally implies cancellation, and checks for
 * interruption are often infrequent, an implementation can favor responding
 * to an interrupt over normal method return.  This is true even if it can be
 * shown that the interrupt occurred after another action may have unblocked
 * the thread.
 * An implementation should document this behavior.
 *
 * @see ReentrantLock
 * @see Condition
 * @see ReadWriteLock
 *
 * @since 1.5
 * @author Doug Lea
 */
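The "hand-over-hand" (chain) locking mentioned in the javadoc above can be sketched with per-node ReentrantLocks; the `Node` list and `sum` traversal here are a minimal illustration, not JDK code. The key move is acquiring the next node's lock before releasing the current one, so no writer can slip into the gap between them:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ChainLocking {
    static final class Node {
        final Lock lock = new ReentrantLock();
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    // Traverses the list holding at most two node locks at a time:
    // lock A, lock B, unlock A, lock C, unlock B, and so on.
    static int sum(Node head) {
        int total = 0;
        Node current = head;
        current.lock.lock();
        while (current != null) {
            total += current.value;
            Node next = current.next;
            if (next != null) next.lock.lock(); // acquire the next node first...
            current.lock.unlock();              // ...then release the current one
            current = next;
        }
        return total;
    }

    public static void main(String[] args) {
        Node a = new Node(1); Node b = new Node(2); Node c = new Node(3);
        a.next = b; b.next = c;
        System.out.println(sum(a)); // 6
    }
}
```

This acquire/release interleaving cannot be expressed with `synchronized` blocks, whose locks must be released in the same lexical scope and in reverse acquisition order.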
/**
 * Acquires the lock.
 *
 * <p>If the lock is not available then the current thread becomes
 * disabled for thread scheduling purposes and lies dormant until the
 * lock has been acquired.
 *
 * <p><b>Implementation Considerations</b>
 *
 * <p>A {@code Lock} implementation may be able to detect erroneous use
 * of the lock, such as an invocation that would cause deadlock, and
 * may throw an (unchecked) exception in such circumstances.  The
 * circumstances and the exception type must be documented by that
 * {@code Lock} implementation.
 */
void lock();
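The lock/try/finally idiom from the interface javadoc, applied with a ReentrantLock to make a shared counter safe across threads (class and field names here are illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockIdiom {
    static final Lock lock = new ReentrantLock();
    static int counter = 0;

    static void increment() {
        lock.lock();          // blocks until the lock is acquired
        try {
            counter++;        // the resource protected by this lock
        } finally {
            lock.unlock();    // always released, even if the body throws
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // 2000, no lost updates
    }
}
```

Putting `unlock()` in `finally` is what replaces the automatic release that a `synchronized` block would have provided.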
/**
 * Acquires the lock unless the current thread is
 * {@linkplain Thread#interrupt interrupted}.
 *
 * <p>Acquires the lock if it is available and returns immediately.
 *
 * <p>If the lock is not available then the current thread becomes
 * disabled for thread scheduling purposes and lies dormant until
 * one of two things happens:
 *
 * <ul>
 * <li>The lock is acquired by the current thread; or
 * <li>Some other thread {@linkplain Thread#interrupt interrupts} the
 * current thread, and interruption of lock acquisition is supported.
 * </ul>
 *
 * <p>If the current thread:
 * <ul>
 * <li>has its interrupted status set on entry to this method; or
 * <li>is {@linkplain Thread#interrupt interrupted} while acquiring the
 * lock, and interruption of lock acquisition is supported,
 * </ul>
 * then {@link InterruptedException} is thrown and the current thread's
 * interrupted status is cleared.
 *
 * <p><b>Implementation Considerations</b>
 *
 * <p>The ability to interrupt a lock acquisition in some
 * implementations may not be possible, and if possible may be an
 * expensive operation.  The programmer should be aware that this
 * may be the case.  An implementation should document when this is
 * the case.
 *
 * <p>An implementation can favor responding to an interrupt over
 * normal method return.
 *
 * <p>A {@code Lock} implementation may be able to detect
 * erroneous use of the lock, such as an invocation that would
 * cause deadlock, and may throw an (unchecked) exception in such
 * circumstances.  The circumstances and the exception type must
 * be documented by that {@code Lock} implementation.
 *
 * @throws InterruptedException if the current thread is
 *         interrupted while acquiring the lock (and interruption
 *         of lock acquisition is supported)
 */
void lockInterruptibly() throws InterruptedException;
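A small demonstration of the two outcomes described above, using ReentrantLock (which does support interruption of lock acquisition); the `acquireOrInterrupt` helper is illustrative only:

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLock {
    // Starts a thread that waits for the lock via lockInterruptibly() and
    // reports whether it ended up "acquired" or "interrupted".
    static String acquireOrInterrupt(ReentrantLock lock, boolean interruptWaiter)
            throws InterruptedException {
        StringBuilder result = new StringBuilder();
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // parks until the lock is free, or throws on interrupt
                try {
                    result.append("acquired");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                result.append("interrupted"); // interrupted status was cleared for us
            }
        });
        waiter.start();
        if (interruptWaiter) {
            Thread.sleep(100);  // give the waiter time to block on the held lock
            waiter.interrupt(); // wakes it with InterruptedException
        }
        waiter.join();
        return result.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main thread holds the lock, so the waiter must block
        System.out.println(acquireOrInterrupt(lock, true));  // interrupted
        lock.unlock();
        System.out.println(acquireOrInterrupt(lock, false)); // acquired
    }
}
```

Had the waiter called plain `lock()` instead, the interrupt would merely set its interrupted status and the thread would keep waiting.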