Chapters
Note: The Collections Framework uses generics syntax, and many collections use generics. It's recommended that readers are knowledgeable about generics before reading this article.
Java has pre-defined classes that group object instances. These classes are similar to standard arrays. However, a standard array is not optimized for certain tasks. The Collections Framework, on the other hand, has classes that are optimized for certain tasks. Unlike standard arrays, though, Collections Framework classes can only store object types.
First off, let's start with the Collection interface. The Collection interface is the root interface in the collection hierarchy. A collection represents a group of objects, known as its elements. Some collections allow duplicate elements and others do not. Some are ordered and others unordered. The Collection interface has three subinterfaces: Set, List, and Queue.
Set is a collection that contains no duplicate elements. More formally, sets contain no pair of elements e1 and e2 such that e1.equals(e2), and at most one null element. As implied by its name, this interface models the mathematical set abstraction.
Some set implementations have restrictions on the elements that they may contain. For example, some implementations prohibit null elements, and some have restrictions on the types of their elements.
List is an ordered collection (also known as a sequence). The user of this interface has precise control over where in the list each element is inserted. The user can access elements by their integer index (position in the list), and search for elements in the list.
Unlike sets, lists typically allow duplicate elements. More formally, lists typically allow pairs of elements e1 and e2 such that e1.equals(e2), and they typically allow multiple null elements if they allow null elements at all. It is not inconceivable that someone might wish to implement a list that prohibits duplicates, by throwing runtime exceptions when the user attempts to insert them, but this usage is expected to be rare.
Queue is a collection used to hold multiple elements prior to processing. Besides basic Collection operations, a Queue provides additional insertion, extraction, and inspection operations. Queues typically, but do not necessarily, order elements in a FIFO (first-in, first-out) manner. Among the exceptions are priority queues, which order elements according to a supplied comparator or the elements' natural ordering.
Whatever the ordering used, the head of the queue is the element that would be removed by a call to remove or poll. In a FIFO queue, all new elements are inserted at the tail of the queue. Other kinds of queues may use different placement rules. Every Queue implementation must specify its ordering properties.
Deque is a linear collection that supports element insertion and removal at both ends. The name deque is short for "double ended queue" and is usually pronounced "deck". Most Deque implementations place no fixed limits on the number of elements they may contain, but this interface supports capacity-restricted deques as well as those with no fixed size limit.
These interfaces form the backbone of the concrete classes that we will discuss next.
In the previous topic, we encountered the word "FIFO". In this topic, I'm going to briefly discuss FIFO (First-In-First-Out) and LIFO (Last-In-First-Out). FIFO is a way of handling a collection of data in which the first element to come in is the first one to come out. FIFO is analogous to people climbing up a ladder: the very first person to climb up the ladder is the first to leave at the upper end.
LIFO is a way of handling a collection of data in which the last element to come in is the first one to come out. LIFO is analogous to a stack of books: the first book we put on the stack is at the bottom, and newly added books always go on top of the old ones. To get the book at the bottom of the stack, we pick off the books on top one by one until we reach it.
In later topics, you may encounter expressions like O(n), O(1), etc. These are written in Big O notation. I will briefly explain Big O notation here, so you're not clueless when you encounter one of them.
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
In computer science, Big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. We don't need to do heavy computations to understand this concept, although some familiarity with math helps to follow along.
In this topic, we're going to discuss eight common Big O functions. So, what are functions? In your high school or college math, I assume you encountered an equation like this one:

f(x) = x + 1

If this equation has an input of 1, its output is 2: f(1) = 1 + 1 = 2.

In Big O notation, we replace f(x) with O(n), where n describes the size of the input. The eight functions we'll refer to are O(1) (constant), O(log n) (logarithmic), O(n) (linear), O(n log n) (linearithmic), O(n²) (quadratic), O(n³) (cubic), O(2ⁿ) (exponential), and O(n!) (factorial). Let's start with O(1): a constant-time operation takes the same amount of time no matter how large the input is, like accessing an array element by its index.
In later chapters, you may encounter the term amortized constant time. Amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute. The motivation for amortized analysis is that looking only at the worst-case run time can be too pessimistic.
Instead, amortized analysis averages the running times of a sequence of operations over that sequence. So, we could say that amortized constant time is constant time on average: most operations run in constant time, with occasional slower ones.
ArrayList is a resizable-array implementation of the List interface. It implements all optional list operations, and permits all elements, including null. In addition to implementing the List interface, this class provides methods to manipulate the size of the array that is used internally to store the list. (This class is roughly equivalent to Vector, except that it is unsynchronized.)
The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation.
Normally, the add operation of ArrayList runs in O(1). However, when the internal array runs out of capacity, ArrayList must resize before the element can be added.
During resizing, ArrayList creates a new block (or container) larger than the old one (in current implementations the new capacity is roughly 1.5 times the old one), copies the elements of the old block into the new block, and then disposes of the old block to free up memory.
This takes time because the operation has several sub-operations, of which the add itself is just one. That's why setting an appropriate initial capacity for an ArrayList based on the needs of your program may boost performance. By default, ArrayList has an initial capacity of 10.
Generally, ArrayList is better at accessing elements in a random-access manner. Since ArrayList elements live in a single contiguous block, tracking and accessing them is fairly easy because they occupy a bounded region of memory.
This example demonstrates ArrayList.
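Here is a minimal sketch of basic ArrayList usage (class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayListDemo {
    public static void main(String[] args) {
        List<String> animals = new ArrayList<>();
        animals.add("Dog");
        animals.add("Cat");
        animals.add("Lion");
        animals.set(1, "Tiger");            // replace the element at index 1
        System.out.println(animals.get(0)); // random access by index: Dog
        System.out.println(animals);        // [Dog, Tiger, Lion]
    }
}
```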
LinkedList is a doubly-linked list implementation of the List and Deque interfaces. It implements all optional list operations, and permits all elements (including null). This collection is unsynchronized.
All of the operations perform as could be expected for a doubly-linked list. Operations that index into the list will traverse the list from the beginning or the end, whichever is closer to the specified index.
Generally, LinkedList is better at accessing elements in sequence or at picking off elements at the beginning or end of a collection. Java's LinkedList is a doubly-linked list that relies on nodes instead of a contiguous memory block. Elements can be anywhere in the memory space because the nodes are linked to one another: each node holds the addresses of its previous and next nodes.
LinkedList is not cache-friendly. Its elements are scattered throughout the memory space, so caching them is problematic. LinkedList is fairly slow at random access because it may need to traverse a number of nodes just to find the element we're trying to access. Traversal can start from either the first or the last element, whichever is closer to the target index.
This example demonstrates LinkedList.
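A minimal sketch of LinkedList used through both its List and Deque faces (names are illustrative):

```java
import java.util.LinkedList;

public class LinkedListDemo {
    public static void main(String[] args) {
        LinkedList<String> tasks = new LinkedList<>();
        tasks.add("B");
        tasks.addFirst("A");  // Deque operation: insert at the head
        tasks.addLast("C");   // Deque operation: insert at the tail
        System.out.println(tasks);               // [A, B, C]
        System.out.println(tasks.removeFirst()); // A
        System.out.println(tasks.removeLast());  // C
    }
}
```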
ArrayDeque is a resizable-array implementation of the Deque interface. Array deques have no capacity restrictions; they grow as necessary to support usage. They are not thread-safe; in the absence of external synchronization, they do not support concurrent access by multiple threads. Null elements are prohibited.
This class is likely to be faster than Stack when used as a stack, and faster than LinkedList when used as a queue. Most ArrayDeque operations run in amortized constant time. Exceptions include remove, removeFirstOccurrence, removeLastOccurrence, contains, iterator.remove(), and the bulk operations, all of which run in linear time.
Many people compare ArrayDeque with LinkedList because both are often used for picking off elements at the beginning or end of a collection. Judging by the articles I have read on the internet, many people favor ArrayDeque over LinkedList. Still, rigorous testing is the best way to decide which collection to use.
This example demonstrates ArrayDeque.
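A minimal sketch of ArrayDeque used first as a FIFO queue, then as a LIFO stack (names are illustrative):

```java
import java.util.ArrayDeque;

public class ArrayDequeDemo {
    public static void main(String[] args) {
        ArrayDeque<Integer> deque = new ArrayDeque<>();
        // used as a queue (FIFO): offer at the tail, poll from the head
        deque.offer(1);
        deque.offer(2);
        System.out.println(deque.poll()); // 1
        // used as a stack (LIFO): push and pop at the head
        deque.push(10);
        System.out.println(deque.pop());  // 10
        System.out.println(deque.peek()); // 2
    }
}
```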
The Vector class implements a growable array of objects. Like an array, it contains components that can be accessed using an integer index. However, the size of a Vector can grow or shrink as needed to accommodate adding and removing items after the Vector has been created.
Each vector tries to optimize storage management by maintaining a capacity and a capacityIncrement. The capacity is always at least as large as the vector size; it is usually larger because as components are added to the vector, the vector's storage increases in chunks the size of capacityIncrement.
An application can increase the capacity of a vector before inserting a large number of components; this reduces the amount of incremental reallocation. Vector is synchronized. If a thread-safe implementation is not needed, it is recommended to use ArrayList in place of Vector.
This example demonstrates Vector.
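A minimal sketch of Vector showing the capacity and capacityIncrement behavior described above (names are illustrative):

```java
import java.util.Vector;

public class VectorDemo {
    public static void main(String[] args) {
        // initial capacity 3, capacityIncrement 2
        Vector<String> v = new Vector<>(3, 2);
        v.add("a");
        v.add("b");
        v.add("c");
        System.out.println(v.capacity());   // 3
        v.add("d");                         // exceeds capacity; grows by capacityIncrement
        System.out.println(v.capacity());   // 5
        System.out.println(v.elementAt(3)); // d
    }
}
```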
The Stack class represents a last-in-first-out (LIFO) stack of objects. It extends class Vector with five operations that allow a vector to be treated as a stack. The usual push and pop operations are provided, as well as a method to peek at the top item on the stack, a method to test for whether the stack is empty, and a method to search the stack for an item and discover how far it is from the top.
When a stack is first created, it contains no items. A more complete and consistent set of LIFO stack operations is provided by the Deque interface and its implementations, which should be used in preference to this class. Stack is a legacy collection; you will mostly see it in old Java applications. In modern Java applications, ArrayDeque or other Deque implementations are used.
This example demonstrates Stack.
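A minimal sketch of the five Stack operations mentioned above (names are illustrative):

```java
import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<String> books = new Stack<>();
        books.push("bottom");
        books.push("middle");
        books.push("top");
        System.out.println(books.peek());           // top (does not remove)
        System.out.println(books.pop());            // top (removes)
        // search() returns the 1-based distance from the top of the stack
        System.out.println(books.search("bottom")); // 2
        System.out.println(books.empty());          // false
    }
}
```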
PriorityQueue is an unbounded queue based on a priority heap. An unbounded queue is a queue that is not limited by a fixed capacity, though PriorityQueue does have an internal capacity that grows automatically as needed.
The elements of the priority queue are ordered according to their natural ordering, or by a Comparator provided at queue construction time, depending on which constructor is used. A priority queue does not permit null elements.
A priority queue relying on natural ordering also does not permit insertion of non-comparable objects (doing so may result in ClassCastException). The head of this queue is the least element with respect to the specified ordering.
For example, we have this collection of Strings:
{"Zebra", "Dog", "Lion", "Cat"}
If our PriorityQueue uses default ordering which is natural ordering, PriorityQueue will prioritize the strings like this:
{"Cat", "Dog", "Lion", "Zebra"}
"Cat" is the first element which is the head of the queue. Thus, "Cat" is the least element, followed by "Dog" and so on. An element located at the head in a PriorityQueue is the first element to be retrieved.
If multiple elements are tied for least value, the head is one of those elements; ties are broken arbitrarily. For example, assuming our PriorityQueue uses natural ordering, if three String objects have the same value, they may end up ordered arbitrarily relative to one another. The queue retrieval operations poll, remove, peek, and element access the element at the head of the queue.
Implementation note: this implementation provides O(log(n)) time for the enqueuing and dequeuing methods (offer, poll, remove() and add); linear time for the remove(Object) and contains(Object) methods; and constant time for the retrieval methods (peek, element, and size).
This example demonstrates PriorityQueue.
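A minimal sketch using the animal strings from above; poll() always removes the head, i.e. the least element under natural ordering (names are illustrative):

```java
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<String> pq = new PriorityQueue<>();
        pq.add("Zebra");
        pq.add("Dog");
        pq.add("Lion");
        pq.add("Cat");
        // elements come out in priority order, not insertion order
        while (!pq.isEmpty()) {
            System.out.println(pq.poll()); // Cat, Dog, Lion, Zebra
        }
    }
}
```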
HashSet implements the Set interface, backed by a hash table (actually a HashMap instance). It makes no guarantees as to the iteration order of the set; in particular, it does not guarantee that the order will remain constant over time. This class permits the null element.
This class offers constant time performance for the basic operations (add, remove, contains and size), assuming the hash function disperses the elements properly among the buckets. Iterating over this set requires time proportional to the sum of the HashSet instance's size (the number of elements) plus the "capacity" of the backing HashMap instance (the number of buckets). Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important.
Load factor and capacity affect HashSet's performance. The load factor is a value (ranging from 0 to 1) that determines the threshold at which the collection will resize. By default, a HashSet has an initial capacity of 16 and a load factor of .75. So, if the HashSet's load (number of elements) exceeds 75% of its capacity, the capacity is doubled.
If we compute the threshold: 16 * .75 = 12. If the HashSet's size grows greater than 12, its capacity is doubled. As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost.
HashSet implements the Set interface, so duplicate elements are not allowed, and at most one null element is permitted. We may want to consider using HashSet when the order of elements doesn't matter. Note that HashSet is not synchronized.
This example demonstrates HashSet.
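A minimal sketch of HashSet; note how add() reports whether the element was actually inserted (names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        System.out.println(set.add("Dog"));      // true
        System.out.println(set.add("Cat"));      // true
        System.out.println(set.add("Dog"));      // false: duplicates are rejected
        System.out.println(set.contains("Cat")); // true
        System.out.println(set.size());          // 2
    }
}
```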
LinkedHashSet is a hash table and linked list implementation of the Set interface, with predictable iteration order. This implementation differs from HashSet in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration ordering, which is the order in which elements were inserted into the set (insertion-order).
Note that insertion order is not affected if an element is re-inserted into the set. (An element e is reinserted into a set s if s.add(e) is invoked when s.contains(e) would return true immediately prior to the invocation.) In other words, if you re-insert an object that already exists in the set, the call has no effect and the position of the existing element stays the same.
This class provides all of the optional Set operations, and permits null elements. Like HashSet, it provides constant-time performance for the basic operations (add, contains and remove), assuming the hash function disperses elements properly among the buckets.
Performance is likely to be just slightly below that of HashSet, due to the added expense of maintaining the linked list, with one exception: Iteration over a LinkedHashSet requires time proportional to the size of the set, regardless of its capacity. Iteration over a HashSet is likely to be more expensive, requiring time proportional to its number of elements plus its capacity.
A linked hash set has two parameters that affect its performance: initial capacity and load factor. They are defined precisely as for HashSet. Note, however, that the penalty for choosing an excessively high value for initial capacity is less severe for this class than for HashSet, as iteration times for this class are unaffected by capacity. Note that LinkedHashSet is not synchronized.
This example demonstrates LinkedHashSet.
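A minimal sketch showing that LinkedHashSet iterates in insertion order and that re-insertion does not move an element (names are illustrative):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class LinkedHashSetDemo {
    public static void main(String[] args) {
        Set<String> set = new LinkedHashSet<>();
        set.add("Zebra");
        set.add("Dog");
        set.add("Cat");
        set.add("Zebra"); // re-insertion: "Zebra" keeps its original position
        System.out.println(set); // [Zebra, Dog, Cat] -- insertion order
    }
}
```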
TreeSet is a NavigableSet implementation based on a TreeMap. The elements are ordered using their natural ordering, or by a Comparator provided at set creation time, depending on which constructor is used. This implementation provides guaranteed log(n) time cost for the basic operations (add, remove and contains).
TreeSet is backed by a binary search tree, which performs well for add, remove, and search operations. TreeSet is based on TreeMap, and TreeMap implements a red-black tree, a self-balancing kind of binary search tree. TreeSet implements NavigableSet, which extends SortedSet, which extends Set. Take note, TreeSet is not synchronized.
This example demonstrates TreeSet.
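A minimal sketch of TreeSet, including one NavigableSet operation (names are illustrative):

```java
import java.util.TreeSet;

public class TreeSetDemo {
    public static void main(String[] args) {
        TreeSet<Integer> set = new TreeSet<>();
        set.add(42);
        set.add(7);
        set.add(19);
        System.out.println(set);           // [7, 19, 42] -- kept in sorted order
        System.out.println(set.first());   // 7
        System.out.println(set.higher(7)); // 19 -- NavigableSet operation
    }
}
```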
EnumSet is a specialized Set implementation for use with enum types. All of the elements in an enum set must come from a single enum type that is specified, explicitly or implicitly, when the set is created. Enum sets are represented internally as bit vectors. This representation is extremely compact and efficient. The space and time performance of this class should be good enough to allow its use as a high-quality, typesafe alternative to traditional int-based "bit flags." Even bulk operations (such as containsAll and retainAll) should run very quickly if their argument is also an enum set.
The iterator returned by the iterator method traverses the elements in their natural order (the order in which the enum constants are declared). The returned iterator is weakly consistent: it will never throw ConcurrentModificationException and it may or may not show the effects of any modifications to the set that occur while the iteration is in progress.
Null elements are not permitted. Attempts to insert a null element will throw NullPointerException. Attempts to test for the presence of a null element or to remove one will, however, function properly. All basic operations execute in constant time. They are likely (though not guaranteed) to be much faster than their HashSet counterparts. Even bulk operations execute in constant time if their argument is also an enum set. Take note, EnumSet is not synchronized.
This example demonstrates EnumSet.
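A minimal sketch of EnumSet; the Day enum is a made-up example type:

```java
import java.util.EnumSet;

public class EnumSetDemo {
    enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }

    public static void main(String[] args) {
        EnumSet<Day> weekend = EnumSet.of(Day.SAT, Day.SUN);
        // complementOf builds the set of all constants not in the argument
        EnumSet<Day> weekdays = EnumSet.complementOf(weekend);
        System.out.println(weekend);                    // [SAT, SUN]
        System.out.println(weekdays.contains(Day.MON)); // true
        System.out.println(weekdays.contains(Day.SAT)); // false
    }
}
```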
Map is a data structure that maps keys to values. A map cannot contain duplicate keys; each key can map to at most one value. This interface takes the place of the Dictionary class, which was a totally abstract class rather than an interface. Map is part of the Collections Framework, but Map is not a subinterface of the Collection interface. Map and Collection are two separate hierarchies in the Collections Framework.
The Map interface provides three collection views, which allow a map's contents to be viewed as a set of keys, collection of values, or set of key-value mappings. The order of a map is defined as the order in which the iterators on the map's collection views return their elements. Some map implementations, like the TreeMap class, make specific guarantees as to their order; others, like the HashMap class, do not.
Note: great care must be exercised if mutable objects are used as map keys. The behavior of a map is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is a key in the map. A special case of this prohibition is that it is not permissible for a map to contain itself as a key. While it is permissible for a map to contain itself as a value, extreme caution is advised: the equals and hashCode methods are no longer well defined on such a map.
HashMap is a hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.
This implementation provides constant-time performance for the basic operations (get and put), assuming the hash function disperses the elements properly among the buckets. Iteration over collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings).
Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important. HashMap is very similar to HashSet because HashSet is backed by a HashMap. In HashMap, however, we store key-value pairs, since it implements the Map interface instead of the Set interface. Take note, HashMap is not synchronized.
This example demonstrates HashMap.
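A minimal sketch of HashMap; note that putting an existing key replaces its value (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("Dog", 5);
        ages.put("Cat", 3);
        ages.put("Dog", 6); // same key: the old value is replaced
        System.out.println(ages.get("Dog"));              // 6
        System.out.println(ages.getOrDefault("Lion", 0)); // 0
        System.out.println(ages.size());                  // 2
    }
}
```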
WeakHashMap is a hash table based implementation of the Map interface, with weak keys. An entry in a WeakHashMap will automatically be removed when its key is no longer in ordinary use. More precisely, the presence of a mapping for a given key will not prevent the key from being discarded by the garbage collector, that is, made finalizable, finalized, and then reclaimed. When a key has been discarded its entry is effectively removed from the map, so this class behaves somewhat differently from other Map implementations.
Both null values and the null key are supported. This class has performance characteristics similar to those of the HashMap class, and has the same efficiency parameters of initial capacity and load factor. Like most collection classes, this class is not synchronized. A synchronized WeakHashMap may be constructed using the Collections.synchronizedMap method.
To understand this collection, we need to understand Java reference types. There are four reference types: strong, soft, weak, and phantom. In this topic, we're going to focus on three of them: strong, soft, and weak references. A strong reference is the kind we use in everyday programming; an ordinary assignment such as String str = new String("data"); creates a strong reference to the String object.
Creating soft and weak references is more explicit. We need to instantiate a SoftReference to create a soft reference, and a WeakReference to create a weak reference.
In other words, objects that only have soft references may be garbage collected if Java needs more memory. Objects that only have weak references are short-lived: they will typically be garbage collected within a few GC cycles. WeakHashMap keys are held through weak references. Even though the values of a WeakHashMap are not weak references, an entry's value becomes eligible for disposal once its key is disposed.
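A minimal sketch of the three reference strengths (variable names are illustrative; whether a referent has actually been collected when get() is called depends on the garbage collector):

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        String strong = new String("strong");  // strong reference: plain assignment
        SoftReference<String> soft = new SoftReference<>(new String("soft"));
        WeakReference<String> weak = new WeakReference<>(new String("weak"));
        // get() returns the referent, or null if it has been collected
        System.out.println(soft.get());
        System.out.println(weak.get());
    }
}
```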
You may want to consider using WeakHashMap if you're planning to create storage for temporary data. This example demonstrates WeakHashMap. Take note, WeakHashMap is not synchronized.
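A minimal sketch of WeakHashMap behavior (names are illustrative; the entry disappears only after the garbage collector actually runs, which System.gc() requests but does not guarantee):

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();        // strong reference keeps the entry alive
        cache.put(key, "temporary data");
        System.out.println(cache.size()); // 1
        key = null;                       // drop the only strong reference to the key
        System.gc();                      // request collection (not guaranteed)
        Thread.sleep(100);
        System.out.println(cache.size()); // likely 0, once the key is collected
    }
}
```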
More information can be found in WeakHashMap documentation.
LinkedHashMap is a hash table and linked list implementation of the Map interface, with predictable iteration order. This implementation differs from HashMap in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration ordering, which is normally the order in which keys were inserted into the map (insertion-order).
Note that insertion order is not affected if a key is re-inserted into the map. (A key k is reinserted into a map m if m.put(k, v) is invoked when m.containsKey(k) would return true immediately prior to the invocation). A special constructor is provided to create a linked hash map whose order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access-order). This kind of map is well-suited to building LRU caches.
A linked hash map has two parameters that affect its performance: initial capacity and load factor. They are defined precisely as for HashMap. Note, however, that the penalty for choosing an excessively high value for initial capacity is less severe for this class than for HashMap, as iteration times for this class are unaffected by capacity.
A structural modification is any operation that adds or deletes one or more mappings or, in the case of access-ordered linked hash maps, affects iteration order. In insertion-ordered linked hash maps, merely changing the value associated with a key that is already contained in the map is not a structural modification. In access-ordered linked hash maps, merely querying the map with get is a structural modification. Take note, LinkedHashMap is not synchronized.
This example demonstrates LinkedHashMap.
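A minimal sketch of an access-ordered LinkedHashMap used as a tiny LRU cache, as hinted at above (names and the cache size of 3 are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    public static void main(String[] args) {
        // access-order map that evicts the eldest entry beyond 3 entries
        Map<String, Integer> lru = new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > 3;
            }
        };
        lru.put("a", 1);
        lru.put("b", 2);
        lru.put("c", 3);
        lru.get("a");    // structural modification: "a" becomes most recent
        lru.put("d", 4); // evicts "b", the least-recently accessed entry
        System.out.println(lru.keySet()); // [c, a, d]
    }
}
```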
TreeMap is a Red-Black Tree based NavigableMap implementation. The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used. This implementation provides guaranteed log(n) time cost for the containsKey, get, put and remove operations. Algorithms are adaptations of those in Cormen, Leiserson, and Rivest's Introduction to Algorithms.
TreeMap is very similar to TreeSet because TreeSet is based on TreeMap. In TreeMap, however, we store key-value pairs, since it implements the Map interface instead of the Set interface. The Map.Entry pairs returned by this map's methods and views represent snapshots and do not support the Entry.setValue method. Take note, TreeMap is not synchronized.
This example demonstrates TreeMap.
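A minimal sketch of TreeMap, including one NavigableMap operation (names are illustrative):

```java
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<String, Integer> map = new TreeMap<>();
        map.put("Zebra", 1);
        map.put("Cat", 2);
        map.put("Dog", 3);
        System.out.println(map.firstKey());       // Cat -- keys are kept sorted
        System.out.println(map.keySet());         // [Cat, Dog, Zebra]
        System.out.println(map.higherKey("Cat")); // Dog -- NavigableMap operation
    }
}
```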
EnumMap is a specialized Map implementation for use with enum type keys. All of the keys in an enum map must come from a single enum type that is specified, explicitly or implicitly, when the map is created. Enum maps are represented internally as arrays. This representation is extremely compact and efficient.
Enum maps are maintained in the natural order of their keys (the order in which the enum constants are declared). This is reflected in the iterators returned by the collection views (keySet(), entrySet(), and values()). Iterators returned by the collection views are weakly consistent: they will never throw ConcurrentModificationException and they may or may not show the effects of any modifications to the map that occur while the iteration is in progress.
Null keys are not permitted. Attempts to insert a null key will throw NullPointerException. Attempts to test for the presence of a null key or to remove one will, however, function properly. Null values are permitted. Implementation note: all basic operations (put(), get(), etc.) execute in constant time. They are likely (though not guaranteed) to be faster than their HashMap counterparts. Take note, EnumMap is not synchronized.
This example demonstrates EnumMap.
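A minimal sketch of EnumMap; the Day enum is a made-up example type. Note that iteration follows the enum declaration order, not insertion order:

```java
import java.util.EnumMap;

public class EnumMapDemo {
    enum Day { MON, TUE, WED }

    public static void main(String[] args) {
        EnumMap<Day, String> schedule = new EnumMap<>(Day.class);
        schedule.put(Day.WED, "Gym");
        schedule.put(Day.MON, "Work");
        // MON comes first because it is declared first, even though WED was inserted first
        System.out.println(schedule); // {MON=Work, WED=Gym}
    }
}
```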
The Hashtable class implements a hash table, which maps keys to values. Any non-null object can be used as a key or as a value. To successfully store and retrieve objects from a hashtable, the objects used as keys must implement the hashCode method and the equals method.
An instance of Hashtable has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. Note that the hash table is open: in the case of a "hash collision", a single bucket stores multiple entries, which must be searched sequentially.
The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. The initial capacity and load factor parameters are merely hints to the implementation. The exact details as to when and whether the rehash method is invoked are implementation-dependent.
Generally, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the time cost to look up an entry (which is reflected in most Hashtable operations, including get and put).
The initial capacity controls a tradeoff between wasted space and the need for rehash operations, which are time-consuming. No rehash operations will ever occur if the initial capacity is greater than the maximum number of entries the Hashtable will contain divided by its load factor. However, setting the initial capacity too high can waste space.
As of the Java 2 platform v1.2, this class was retrofitted to implement the Map interface, making it a member of the Java Collections Framework. Unlike the new collection implementations, Hashtable is synchronized. If a thread-safe implementation is not needed, it is recommended to use HashMap in place of Hashtable. If a thread-safe highly-concurrent implementation is desired, then it is recommended to use ConcurrentHashMap in place of Hashtable.
This example demonstrates Hashtable.
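A minimal sketch of Hashtable (names are illustrative); remember that, unlike HashMap, it rejects null keys and values:

```java
import java.util.Hashtable;

public class HashtableDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<>();
        table.put("one", 1);
        table.put("two", 2);
        System.out.println(table.get("one"));         // 1
        System.out.println(table.containsKey("two")); // true
        // Hashtable prohibits nulls; this line would throw NullPointerException:
        // table.put(null, 3);
    }
}
```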
We can convert a collection to another collection. For example, we can convert an ArrayList to a LinkedList, an ArrayDeque to an ArrayList, etc. However, some conversions may not be necessary. For example, converting between HashMap and HashSet may not be necessary because HashSet is a Set implementation backed by HashMap. Also, the Collection interface and the Map interface represent different kinds of data structure, so conversion between the two may need several workarounds.
Let's convert collections that implement subinterfaces of the Collection interface. Most implementations of the Collection interface have constructors that accept a Collection type as an argument. Take a look at this example:
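A minimal sketch of copy-constructor conversion between Collection implementations (names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ConversionDemo {
    public static void main(String[] args) {
        List<String> arrayList = new ArrayList<>(List.of("a", "b", "c"));
        // copy-constructor conversion: ArrayList -> LinkedList
        LinkedList<String> linkedList = new LinkedList<>(arrayList);
        // copy-constructor conversion: ArrayList -> ArrayDeque
        ArrayDeque<String> deque = new ArrayDeque<>(arrayList);
        System.out.println(linkedList);        // [a, b, c]
        System.out.println(deque.peekFirst()); // a
    }
}
```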
If we want to copy the elements of an existing collection into another existing collection, we use the addAll() method.
List implementations also have another form of addAll() that takes an index:
This second form inserts all of the elements in the specified collection into the list that calls the method, starting at the specified index.
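A minimal sketch of both addAll() forms (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class AddAllDemo {
    public static void main(String[] args) {
        List<String> source = List.of("b", "c");

        List<String> target = new ArrayList<>(List.of("a", "d"));
        target.addAll(source);       // first form: append at the end
        System.out.println(target);  // [a, d, b, c]

        List<String> target2 = new ArrayList<>(List.of("a", "d"));
        target2.addAll(1, source);   // second form: insert starting at index 1
        System.out.println(target2); // [a, b, c, d]
    }
}
```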
Implementations of the Map interface have constructors that accept a Map type as an argument, as well as a method form, putAll(), that copies all mappings from one map into an existing map. Take a look at this example:
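A minimal sketch of both the constructor form and the putAll() method form (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapConversionDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<>(Map.of("b", 2, "a", 1));

        // constructor form: HashMap -> TreeMap
        TreeMap<String, Integer> treeMap = new TreeMap<>(hashMap);
        System.out.println(treeMap); // {a=1, b=2}

        // method form: putAll() copies mappings into an existing map
        TreeMap<String, Integer> other = new TreeMap<>();
        other.putAll(hashMap);
        System.out.println(other);   // {a=1, b=2}
    }
}
```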
Conversion between Collection and Map is kinda tricky. A map requires key-value pairs, whereas a collection of the Collection interface holds single values. Let's start by converting a Map to a Collection. There are multiple ways to do this. Let's start with separately converting the keys and the values of a map to a collection.
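A sketch of that approach via keySet() and values(); a TreeMap is used here so the output order is predictable (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MapToCollectionDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new TreeMap<>(Map.of("a", 1, "b", 2));
        List<String> keys = new ArrayList<>(map.keySet());    // keys only
        List<Integer> values = new ArrayList<>(map.values()); // values only
        System.out.println(keys);   // [a, b]
        System.out.println(values); // [1, 2]
    }
}
```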
Another way of converting a Map to a Collection is to use the entrySet() method, which returns a Set view of the map's key-value pairs.
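With entrySet(), each element of the resulting collection is a whole Map.Entry, so no information is lost (names are illustrative; a TreeMap keeps the order predictable):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class EntrySetDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new TreeMap<>(Map.of("a", 1, "b", 2));
        // each list element is a complete key-value pair
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(map.entrySet());
        System.out.println(entries.get(0).getKey());   // a
        System.out.println(entries.get(0).getValue()); // 1
    }
}
```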
Next, let's convert a collection of the Collection interface to a Map. Such a collection holds only single values, so to convert it to a Map, we need to decide whether the elements of the collection will become keys or values, or we need a strategy for deriving keys and values from each element.
Remember that a Map doesn't accept duplicate keys. When converting a Collection to a Map, we may produce a duplicate key during the conversion. If that happens when collecting a stream with Collectors.toMap(), Java will throw IllegalStateException unless we supply a merge function.
In toMap(), the first argument produces a key, the next argument produces a value (for example, the length of a String element in the stream pipeline), and the third argument is a BinaryOperator that is used to merge the two values that end up mapped to the same key.
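A minimal sketch of Collection-to-Map conversion with Collectors.toMap(), using the element itself as the key and its length as the value (names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectionToMapDemo {
    public static void main(String[] args) {
        List<String> animals = List.of("Zebra", "Dog", "Lion", "Cat");
        // key = the element itself, value = its length;
        // (a, b) -> b is the merge function: keep the later value on a key collision
        Map<String, Integer> byLength = animals.stream()
                .collect(Collectors.toMap(s -> s, String::length, (a, b) -> b));
        System.out.println(byLength.get("Zebra")); // 5
        System.out.println(byLength.get("Cat"));   // 3
    }
}
```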
Some collections can be unmodifiable. If a collection is unmodifiable, mutating operations like add, remove, and replace will throw UnsupportedOperationException.
An unmodifiable collection has characteristics that make it distinct from a modifiable collection. Read the List, Set, and Map documentation to learn the characteristics of their unmodifiable collections.
This example demonstrates unmodifiable List.
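A minimal sketch of an unmodifiable List created with List.of() (names are illustrative):

```java
import java.util.List;

public class UnmodifiableListDemo {
    public static void main(String[] args) {
        List<String> list = List.of("a", "b", "c"); // unmodifiable
        System.out.println(list.get(1));            // reading is fine: b
        try {
            list.add("d");                          // mutation is rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("add is not supported");
        }
    }
}
```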
Unmodifiable collections that are created by methods like Collections.unmodifiableList() are unmodifiable views. A view cannot be modified directly, but it reflects any changes made to the backing collection.
This example demonstrates unmodifiable view list.
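A minimal sketch of an unmodifiable view: mutations go through the backing list only, yet remain visible through the view (names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableViewDemo {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<>(List.of("a"));
        List<String> view = Collections.unmodifiableList(backing);
        backing.add("b");         // modifying the backing list...
        System.out.println(view); // [a, b] -- ...is visible through the view
        try {
            view.add("c");        // but the view itself rejects modification
        } catch (UnsupportedOperationException e) {
            System.out.println("view is unmodifiable");
        }
    }
}
```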
If arrays can be multi-dimensional, collections can also be multi-dimensional. To create a multi-dimensional collection, we put a collection as an element of another collection. This example demonstrates ArrayList in an ArrayList.
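A minimal sketch of an ArrayList whose elements are themselves ArrayLists, giving a two-dimensional structure (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class MultiDimensionalDemo {
    public static void main(String[] args) {
        List<List<Integer>> grid = new ArrayList<>();
        grid.add(new ArrayList<>(List.of(1, 2)));
        grid.add(new ArrayList<>(List.of(3, 4)));
        System.out.println(grid.get(1).get(0)); // 3 -- row 1, column 0
        grid.get(0).add(9);                     // inner lists can grow independently
        System.out.println(grid);               // [[1, 2, 9], [3, 4]]
    }
}
```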
- Collection Interface and Its SubInterfaces
- FIFO and LIFO
- Brief Explanation of Big O Notation
- ArrayList
- LinkedList
- ArrayDeque
- Vector
- Stack
- PriorityQueue
- HashSet
- LinkedHashSet
- TreeSet
- EnumSet
- Map Interface
- HashMap
- WeakHashMap
- LinkedHashMap
- TreeMap
- EnumMap
- Hashtable
- Converting Collections to Other Collections
- Conversion Between Collections of Collection Interface
- Conversion Between Collections of Map Interface
- Conversion Between Collection and Map
- Unmodifiable Collection
- Unmodifiable View
- Multi-dimensional Collection
Collection Interface and Its SubInterfaces
Note: Many collections in the Collections Framework use generics syntax. It's recommended that readers be knowledgeable about Generics before reading this article.
Java has pre-defined classes that group object instances. These classes are similar to standard arrays. However, a standard array is not optimized for certain tasks. The Collections Framework, on the other hand, has classes that are optimized for certain tasks. Unlike standard arrays, though, Collections Framework classes can only store object types.
First off, let's start with the Collection interface. The Collection interface is the root interface in the collection hierarchy. A collection represents a group of objects, known as its elements. Some collections allow duplicate elements and others do not. Some are ordered and others unordered. The Collection interface has three main subinterfaces: Set, List, and Queue.
Set is a collection that contains no duplicate elements. More formally, sets contain no pair of elements e1 and e2 such that e1.equals(e2), and at most one null element. As implied by its name, this interface models the mathematical set abstraction.
Some set implementations have restrictions on the elements that they may contain. For example, some implementations prohibit null elements, and some have restrictions on the types of their elements.
List is an ordered collection (also known as a sequence). The user of this interface has precise control over where in the list each element is inserted. The user can access elements by their integer index (position in the list), and search for elements in the list.
Unlike sets, lists typically allow duplicate elements. More formally, lists typically allow pairs of elements e1 and e2 such that e1.equals(e2), and they typically allow multiple null elements if they allow null elements at all. It is not inconceivable that someone might wish to implement a list that prohibits duplicates, by throwing runtime exceptions when the user attempts to insert them, but this usage is expected to be rare.
Queue is a collection used to hold multiple elements prior to processing. Besides basic Collection operations, a Queue provides additional insertion, extraction, and inspection operations. Queues typically, but do not necessarily, order elements in a FIFO (first-in, first-out) manner. Among the exceptions are priority queues, which order elements according to a supplied comparator or the elements' natural ordering.
Whatever the ordering used, the head of the queue is the element that would be removed by a call to remove or poll. In a FIFO queue, all new elements are inserted at the tail of the queue. Other kinds of queues may use different placement rules. Every Queue implementation must specify its ordering properties.
Deque is a linear collection that supports element insertion and removal at both ends. The name deque is short for "double ended queue" and is usually pronounced "deck". Most Deque implementations place no fixed limits on the number of elements they may contain, but this interface supports capacity-restricted deques as well as those with no fixed size limit.
These interfaces form the backbone of the concrete classes that we will discuss next.
FIFO and LIFO
In the previous topic, we encountered the word "FIFO". In this topic, I'm going to briefly discuss FIFO (First-In-First-Out) and LIFO (Last-In-First-Out). FIFO is a way of handling a collection of data where the first item to come in is the first to come out. FIFO is analogous to people climbing up a ladder: the first person to climb up the ladder is the first to leave at the upper end.
LIFO is a way of handling a collection of data where the last item in is the first one to come out. LIFO is analogous to books stacked on top of one another. The first book we put on the stack is at the bottom, and newly added books always go on top of the old ones. To get the book at the bottom of the stack, we remove the books on top one by one until we reach the bottom.
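To make the two orderings concrete, here is a small sketch using ArrayDeque, which supports both styles (the element strings are arbitrary):

```java
import java.util.ArrayDeque;

public class FifoLifoDemo {
    public static void main(String[] args) {
        // FIFO: use the deque as a queue (offer at tail, poll at head)
        ArrayDeque<String> queue = new ArrayDeque<>();
        queue.offer("first");
        queue.offer("second");
        System.out.println(queue.poll()); // first  (first in, first out)

        // LIFO: use the deque as a stack (push and pop at the same end)
        ArrayDeque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        System.out.println(stack.pop());  // second (last in, first out)
    }
}
```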
Brief Explanation of Big O Notation
In later topics, you may encounter mathematical expressions like O(n), O(1), and so on. These are written in Big O notation. I will briefly explain Big O notation here, so you won't be clueless when you encounter one of these expressions.
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. We don't need to do heavy computations to understand this concept, though some basic math knowledge helps you follow along.
In this topic, we're going to discuss eight functions of Big O notation. So, what are functions? In your high school or college math, I assume you encountered an equation like f(x) = n, where f(x) is a function, x is an input and n is an output. For example, the function f(x) = x + x gives an output of 2 for the input 1: f(1) = 1 + 1 = 2. In Big O notation, we replace f with O, and n conventionally stands for the size of the input.
Let's start with O(1), which means constant time. The time to do a task remains constant, no matter how big or small the input size is. O(log n) or O(log(n)) is logarithmic time. This time complexity grows only slightly as the input gets larger. O(n) is linear time. This time complexity grows in direct proportion to the input size. O(n log n) or O(n log(n)) is linearithmic time. This time complexity grows significantly when the input is large and keeps getting larger. The functions discussed above are often seen in computer algorithms.
O(n^2) or quadratic time, O(n^3) or cubic time, O(2^n) or exponential time, and O(n!) or factorial time are rarely acceptable in computer algorithms. There are other Big O functions that I didn't include; you can see them in this article.
In later chapters, you may encounter amortized constant time. Amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute. The motivation for amortized analysis is that looking only at the worst-case run time can be too pessimistic. Instead, amortized analysis averages the running times of the operations in a sequence over that whole sequence. So, amortized constant time means constant time on average: most operations are cheap, and the cost of the occasional expensive one is spread across them.
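To see why averaging helps, here is a toy sketch (not Java's actual ArrayList implementation) of a resizable array that doubles its capacity when full, counting how much copy work resizing costs across a million adds:

```java
public class AmortizedDemo {
    public static void main(String[] args) {
        int capacity = 1;
        int size = 0;
        long copies = 0;
        int n = 1_000_000;

        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size; // moving old elements to the new block
                capacity *= 2;
            }
            size++; // the add itself
        }

        // The total copy work averages out to fewer than 2 copies
        // per add, so each add is amortized O(1)
        System.out.println((double) copies / n < 2.0); // true
    }
}
```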
ArrayList
ArrayList is a Resizable-array implementation of the List interface. Implements all optional list operations, and permits all elements, including null. In addition to implementing the List interface, this class provides methods to manipulate the size of the array that is used internally to store the list. (This class is roughly equivalent to Vector, except that it is unsynchronized.)
The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation.
Normally, the add operation of ArrayList runs in O(1) time. An ArrayList's capacity grows dynamically once the ArrayList is full and new elements can't fit in the current capacity. This resizing operation makes the occasional add operation run in O(n) time.
During resizing, the ArrayList allocates a new, larger block (in the reference implementation, about 1.5 times the size of the old one), copies the elements of the old block into the new block, and then discards the old block to free up memory.
This takes time because the resize has several sub-operations, and the add itself is just one of them. That's why setting the initial capacity of an ArrayList based on the needs of your program may boost performance. By default, an ArrayList has an initial capacity of 10.
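For instance, a short sketch of presizing (the element count of 1000 is an arbitrary choice):

```java
import java.util.ArrayList;

public class CapacityDemo {
    public static void main(String[] args) {
        // If we know we'll store about 1000 elements, sizing the
        // list up front avoids repeated resize operations
        ArrayList<Integer> list = new ArrayList<>(1000);
        for (int i = 0; i < 1000; i++) {
            list.add(i); // no resizing happens along the way
        }
        System.out.println(list.size()); // 1000
    }
}
```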
Generally, ArrayList is better at accessing elements in a random-access manner. Since its elements live in one contiguous block, locating and accessing them by index is fast.
This example demonstrates ArrayList.
```java
import java.util.ArrayList;

public class SampleClass{
  public static void main(String[] args){
    //ArrayList instance with initial capacity of 10
    ArrayList<String> arrList = new ArrayList<>();

    //add operation
    System.out.println("Add values...");
    arrList.add("String1");
    arrList.add("String2");
    arrList.add("String3");

    //isEmpty() returns true if the array list is
    //empty. Otherwise, it returns false
    System.out.println("isEmpty: " + arrList.isEmpty());

    //size() returns the number of elements
    System.out.println("size: " + arrList.size());

    //get() returns the element at a specified index
    System.out.println("Get value...");
    String elem = arrList.get(1);
    System.out.println("Element: " + elem);

    //set() replaces the element at a specified index
    //with the specified object
    System.out.println("Replace value...");
    String replacement = "String2a";
    arrList.set(1, replacement);
    System.out.println("Element: " + arrList.get(1));

    //contains(Object o) returns true if an element e exists
    //in the array list such that Objects.equals(o, e)
    System.out.println("Verify value...");
    boolean bool = arrList.contains(replacement);
    System.out.println("String2a exists?: " + bool);

    //remove() removes an element
    System.out.println("Remove value...");
    arrList.remove(1);
    System.out.println("size: " + arrList.size());

    //clear() removes all elements
    System.out.println("Clear values...");
    arrList.clear();
    System.out.println("size: " + arrList.size());

    //toArray() converts the array list to a standard
    //array. The returned array is of type Object[],
    //not the type in the type parameter, because
    //generic type information is erased at runtime.
    //Other collections that extend the Collection
    //interface can also use toArray()
    Object[] myArray = arrList.toArray();
    //String[] myArray = arrList.toArray(); //compile error
  }
}
```

Result:

Add values...
isEmpty: false
size: 3
Get value...
Element: String2
Replace value...
Element: String2a
Verify value...
String2a exists?: true
Remove value...
size: 2
Clear values...
size: 0

More methods can be seen in the ArrayList documentation.
LinkedList
LinkedList is a Doubly-linked list implementation of the List and Deque interfaces. Implements all optional list operations, and permits all elements (including null). This collection is unsynchronized.
All of the operations perform as could be expected for a doubly-linked list. Operations that index into the list will traverse the list from the beginning or the end, whichever is closer to the specified index.
Generally, LinkedList is better at accessing elements in sequence or picking off elements at the beginning or end of a collection. Java's LinkedList is a doubly-linked list that relies on nodes instead of one contiguous memory block. Elements can live anywhere in memory because each node stores the addresses of its previous and next nodes.
LinkedList is not cache-friendly. Its elements are scattered throughout memory, so caching them can be problematic. LinkedList is also fairly slow at random access, because it may need to traverse a number of nodes just to find the element we're trying to access. Traversal can start from either the first or the last element, whichever is closer.
This example demonstrates LinkedList.
```java
import java.util.LinkedList;

public class SampleClass{
  public static void main(String[] args){
    LinkedList<Integer> ll = new LinkedList<Integer>();

    //add operation
    System.out.println("Adding elements...");
    ll.add(50);
    ll.add(77);
    ll.add(254);
    ll.add(29);

    //size() method
    System.out.println("Size: " + ll.size());
    System.out.println("First element: " + ll.get(0));

    System.out.println("Remove first element...");
    //remove() removes the first element of the LinkedList
    ll.remove();
    System.out.println("First element: " + ll.get(0));

    //Using Deque functionalities...
    System.out.println();
    System.out.println("Using Deque Functionalities");
    System.out.println("First element: " + ll.get(0));

    System.out.println("Add element at the beginning");
    //addFirst() adds an element at the beginning of this list
    ll.addFirst(100);
    System.out.println("First element: " + ll.get(0));

    System.out.println("Last element: " + ll.get(ll.size()-1));
    System.out.println("Add element at the end");
    //addLast() adds an element at the end of this list
    ll.addLast(300);
    System.out.println("Last element: " + ll.get(ll.size()-1));

    System.out.println("Get first element");
    //peekFirst() returns the element at the beginning of this
    //list. Returns null if the list is empty.
    System.out.println("First element: " + ll.peekFirst());

    System.out.println("Get last element");
    //peekLast() returns the element at the end of this
    //list. Returns null if the list is empty.
    System.out.println("Last element: " + ll.peekLast());

    System.out.println("Current size: " + ll.size());

    System.out.println("Get and remove first element");
    //pollFirst() returns and removes the element at the
    //beginning of this list. Returns null if the list is empty.
    System.out.println("First element: " + ll.pollFirst());
    System.out.println("Size: " + ll.size());

    System.out.println("Get and remove last element");
    //pollLast() returns and removes the element at the end of
    //this list. Returns null if the list is empty.
    System.out.println("Last element: " + ll.pollLast());
    System.out.println("Size: " + ll.size());
  }
}
```

Result:

Adding elements...
Size: 4
First element: 50
Remove first element...
First element: 77

Using Deque Functionalities
First element: 77
Add element at the beginning
First element: 100
Last element: 29
Add element at the end
Last element: 300
Get first element
First element: 100
Get last element
Last element: 300
Current size: 5
Get and remove first element
First element: 100
Size: 4
Get and remove last element
Last element: 300
Size: 3

Other methods that haven't been shown here can be seen in the LinkedList documentation.
ArrayDeque
ArrayDeque is a resizable-array implementation of the Deque interface. Array deques have no capacity restrictions; they grow as necessary to support usage. They are not thread-safe; in the absence of external synchronization, they do not support concurrent access by multiple threads. Null elements are prohibited.
This class is likely to be faster than Stack when used as a stack, and faster than LinkedList when used as a queue. Most ArrayDeque operations run in amortized constant time. Exceptions include remove, removeFirstOccurrence, removeLastOccurrence, contains, iterator.remove(), and the bulk operations, all of which run in linear time.
Many people compare ArrayDeque with LinkedList because both are often used for picking off elements at the beginning or end of a collection. Judging by the articles I have read on the internet, many people favor ArrayDeque over LinkedList. Still, rigorous testing is the best way to decide which collection to use.
This example demonstrates ArrayDeque.
```java
import java.util.ArrayDeque;

public class SampleClass{
  public static void main(String[] args){
    //Instance of ArrayDeque with an initial
    //capacity of 16
    ArrayDeque<String> ad = new ArrayDeque<>();

    //addFirst() adds an element at the
    //beginning of the arraydeque
    System.out.println("Adding elements at the beginning of arraydeque...");
    ad.addFirst("Lemon");
    ad.addFirst("Cherry");
    ad.addFirst("Apple");
    ad.addFirst("Orange");

    System.out.println("Elements...");
    for(String str : ad)
      System.out.println(str);
    System.out.println("Current Size: " + ad.size());

    System.out.println("Retrieving elements at the end of arraydeque...");
    System.out.println("Elements...");
    //pollLast() returns and removes the element at the end
    //of the arraydeque. Returns null if the deque is empty.
    String elem = ad.pollLast();
    while(elem != null){
      System.out.println(elem);
      elem = ad.pollLast();
    }
    System.out.println("Current Size: " + ad.size());
  }
}
```

Result:

Adding elements at the beginning of arraydeque...
Elements...
Orange
Apple
Cherry
Lemon
Current Size: 4
Retrieving elements at the end of arraydeque...
Elements...
Lemon
Cherry
Apple
Orange
Current Size: 0

More methods can be seen in the ArrayDeque documentation.
Vector
The Vector class implements a growable array of objects. Like an array, it contains components that can be accessed using an integer index. However, the size of a Vector can grow or shrink as needed to accommodate adding and removing items after the Vector has been created.
Each vector tries to optimize storage management by maintaining a capacity and a capacityIncrement. The capacity is always at least as large as the vector size; it is usually larger because as components are added to the vector, the vector's storage increases in chunks the size of capacityIncrement.
An application can increase the capacity of a vector before inserting a large number of components; this reduces the amount of incremental reallocation. Vector is synchronized. If a thread-safe implementation is not needed, it is recommended to use ArrayList in place of Vector.
This example demonstrates Vector.
```java
import java.util.Vector;

public class SampleClass{
  public static void main(String[] args){
    Vector<String> vector = new Vector<>();

    System.out.println("add elements...");
    //add() adds an element at the end of the vector
    vector.add("Lemon");
    vector.add("Cherry");
    vector.add("Apple");
    vector.add("Orange");

    System.out.println("Elements...");
    for(String str : vector)
      System.out.println(str);

    System.out.println("Capacity: " + vector.capacity());
    System.out.println("Size: " + vector.size());

    System.out.println("Get element at index 1");
    //get() returns the object at the specified
    //index in this vector
    String elem = vector.get(1);
    System.out.println("Element: " + elem);
    System.out.println("Size: " + vector.size());

    System.out.println("Remove element at index 1");
    //remove() removes the object at the specified
    //index in this vector
    vector.remove(1);
    System.out.println("Size: " + vector.size());

    System.out.println("Clear vector");
    vector.clear();
    System.out.println("Size: " + vector.size());
  }
}
```

Result:

add elements...
Elements...
Lemon
Cherry
Apple
Orange
Capacity: 10
Size: 4
Get element at index 1
Element: Cherry
Size: 4
Remove element at index 1
Size: 3
Clear vector
Size: 0

More information about Vector can be seen in its documentation.
Stack
The Stack class represents a last-in-first-out (LIFO) stack of objects. It extends class Vector with five operations that allow a vector to be treated as a stack. The usual push and pop operations are provided, as well as a method to peek at the top item on the stack, a method to test for whether the stack is empty, and a method to search the stack for an item and discover how far it is from the top.
When a stack is first created, it contains no items. A more complete and consistent set of LIFO stack operations is provided by the Deque interface and its implementations, which should be used in preference to this class. Stack is a legacy collection; you will mostly see it in old Java applications. In modern Java applications, ArrayDeque or other Deque implementations are used.
This example demonstrates Stack.
```java
import java.util.Stack;

public class SampleClass{
  public static void main(String[] args){
    Stack<String> stack = new Stack<>();
    System.out.println("Empty Stack?: " + stack.empty());

    System.out.println("Push elements on top of stack...");
    //push() pushes an element on top of the stack
    stack.push("Lemon");
    stack.push("Cherry");
    stack.push("Apple");
    stack.push("Orange");

    System.out.println("Empty Stack?: " + stack.empty());
    System.out.println("Current Size: " + stack.size());

    System.out.println("Peek and find \"Apple\".");
    int sz = stack.size();
    String item = null;
    while(sz != 0){
      //peek() returns the element on top of this stack
      //without removing it
      item = stack.peek();
      if(item.equals("Apple")){
        System.out.println(item + " found!");
        break;
      }
      else
        System.out.println(item + " is not Apple.");
      //pop() returns and removes the element on top
      //of this stack
      stack.pop();
      sz = stack.size();
    }
    System.out.println("Remaining items: " + stack.size());
  }
}
```

Result:

Empty Stack?: true
Push elements on top of stack...
Empty Stack?: false
Current Size: 4
Peek and find "Apple".
Orange is not Apple.
Apple found!
Remaining items: 3

More information about Stack can be found in its documentation.
PriorityQueue
PriorityQueue is an unbounded queue based on a priority heap. An unbounded queue is a queue that is not limited by a fixed capacity. A PriorityQueue does have an internal capacity, but it grows automatically as needed.
The elements of the priority queue are ordered according to their natural ordering, or by a Comparator provided at queue construction time, depending on which constructor is used. A priority queue does not permit null elements.
A priority queue relying on natural ordering also does not permit insertion of non-comparable objects (doing so may result in ClassCastException). The head of this queue is the least element with respect to the specified ordering.
For example, say we have this collection of Strings: {"Zebra", "Dog", "Lion", "Cat"}. If our PriorityQueue uses the default ordering, which is natural ordering, it will prioritize the strings like this: {"Cat", "Dog", "Lion", "Zebra"}.
"Cat" is the first element, which is the head of the queue. Thus, "Cat" is the least element, followed by "Dog" and so on. The element at the head of a PriorityQueue is the first element to be retrieved.
If multiple elements are tied for least value, the head is one of those elements; ties are broken arbitrarily. For example, assuming our PriorityQueue uses natural ordering, if three String objects have the same value, they may appear next to one another in any order. The queue retrieval operations poll, remove, peek, and element access the element at the head of the queue.
This implementation provides O(log(n)) time for the enqueuing and dequeuing methods (offer, poll, remove() and add); linear time for the remove(Object) and contains(Object) methods; and constant time for the retrieval methods (peek, element, and size). Take note, this implementation is not synchronized.
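The ordering can also come from a Comparator supplied at construction time. Here is a short sketch (the element strings are arbitrary):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class ComparatorQueueDemo {
    public static void main(String[] args) {
        // Reverse the natural ordering, so the "largest" string
        // becomes the head of the queue
        PriorityQueue<String> pq =
            new PriorityQueue<>(Comparator.reverseOrder());
        pq.offer("Cat");
        pq.offer("Zebra");
        pq.offer("Dog");

        System.out.println(pq.poll()); // Zebra
        System.out.println(pq.poll()); // Dog
        System.out.println(pq.poll()); // Cat
    }
}
```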
This example demonstrates PriorityQueue.
```java
import java.util.PriorityQueue;

public class SampleClass{
  public static void main(String[] args){
    PriorityQueue<String> pq = new PriorityQueue<>();

    System.out.println("Add elements...");
    //offer() adds an element to this PriorityQueue
    pq.offer("Ocelot");
    pq.offer("Zebra");
    pq.offer("Dog");
    pq.offer("Lion");
    pq.offer("Cat");

    //poll() returns and removes the element at the head
    //of this PriorityQueue. Returns null if the
    //PriorityQueue is empty.
    String elem = pq.poll();
    System.out.println("Display elements from the\n"
                      +"head(top) up to the tail(bottom)\n"
                      +"of this queue.");
    while(elem != null){
      System.out.println(elem);
      elem = pq.poll();
    }
  }
}
```

Result:

Add elements...
Display elements from the
head(top) up to the tail(bottom)
of this queue.
Cat
Dog
Lion
Ocelot
Zebra

More information can be found in the PriorityQueue documentation.
HashSet
HashSet implements the Set interface, backed by a hash table (actually a HashMap instance). It makes no guarantees as to the iteration order of the set; in particular, it does not guarantee that the order will remain constant over time. This class permits the null element.
This class offers constant time performance for the basic operations (add, remove, contains and size), assuming the hash function disperses the elements properly among the buckets. Iterating over this set requires time proportional to the sum of the HashSet instance's size (the number of elements) plus the "capacity" of the backing HashMap instance (the number of buckets). Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important.
Load factor and capacity affect a HashSet's performance. The load factor is a value between 0 and 1 that sets the threshold at which the collection resizes. By default, a HashSet has an initial capacity of 16 and a load factor of .75. So, if the HashSet's load (number of elements) exceeds 75% of its capacity, the capacity is doubled.
If we compute the threshold: 16 * .75 = 12. If the HashSet holds more than 12 elements, its capacity is doubled. As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost.
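Both parameters can be tuned through the two-argument constructor. A sketch (the values 64 and 0.6f are arbitrary choices for illustration):

```java
import java.util.HashSet;

public class LoadFactorDemo {
    public static void main(String[] args) {
        // Initial capacity of 64 buckets and a load factor of 0.6:
        // the set resizes once its size passes about 64 * 0.6 = 38
        HashSet<Integer> set = new HashSet<>(64, 0.6f);
        for (int i = 0; i < 38; i++) {
            set.add(i);
        }
        System.out.println(set.size()); // 38
    }
}
```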
HashSet implements the Set interface. Thus, duplicate elements are not allowed, and only one null element is permitted. We may want to consider using HashSet when the order of elements doesn't matter. Note that HashSet is not synchronized.
This example demonstrates HashSet.
```java
import java.util.HashSet;

public class SampleClass{
  public static void main(String[] args){
    HashSet<Integer> hs = new HashSet<>();
    hs.add(2);
    hs.add(3);
    hs.add(5);
    hs.add(13);
    hs.add(7);
    hs.add(null);
    hs.add(11);
    hs.add(13);   //duplicate, ignored
    hs.add(null); //duplicate, ignored

    for(Integer i : hs)
      System.out.println(i);
  }
}
```

Result:

null
2
3
5
7
11
13

More information can be found in the HashSet documentation.
LinkedHashSet
LinkedHashSet is a hash table and linked list implementation of the Set interface, with predictable iteration order. This implementation differs from HashSet in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration ordering, which is the order in which elements were inserted into the set (insertion-order).
Note that insertion order is not affected if an element is re-inserted into the set. (An element e is reinserted into a set s if s.add(e) is invoked when s.contains(e) would return true immediately prior to the invocation.) In other words, if you re-insert an object that already exists in the set, the insertion is simply ignored and the position of the existing element stays the same.
This class provides all of the optional Set operations, and permits null elements. Like HashSet, it provides constant-time performance for the basic operations (add, contains and remove), assuming the hash function disperses elements properly among the buckets.
Performance is likely to be just slightly below that of HashSet, due to the added expense of maintaining the linked list, with one exception: Iteration over a LinkedHashSet requires time proportional to the size of the set, regardless of its capacity. Iteration over a HashSet is likely to be more expensive, requiring time proportional to its number of elements plus its capacity.
A linked hash set has two parameters that affect its performance: initial capacity and load factor. They are defined precisely as for HashSet. Note, however, that the penalty for choosing an excessively high value for initial capacity is less severe for this class than for HashSet, as iteration times for this class are unaffected by capacity. Note that LinkedHashSet is not synchronized.
This example demonstrates LinkedHashSet.
```java
import java.util.LinkedHashSet;

public class SampleClass{
  public static void main(String[] args){
    LinkedHashSet<Integer> hs = new LinkedHashSet<>();
    hs.add(2);
    hs.add(3);
    hs.add(5);
    hs.add(13);
    hs.add(7);
    hs.add(null);
    hs.add(11);
    hs.add(13);   //duplicate, ignored
    hs.add(null); //duplicate, ignored

    for(Integer i : hs)
      System.out.println(i);
  }
}
```

Result:

2
3
5
13
7
null
11

More information can be found in the LinkedHashSet documentation.
TreeSet
TreeSet is a NavigableSet implementation based on a TreeMap. The elements are ordered using their natural ordering, or by a Comparator provided at set creation time, depending on which constructor is used. This implementation provides guaranteed log(n) time cost for the basic operations (add, remove and contains).
TreeSet uses a binary search tree, an algorithm with great performance for add, remove and search operations. TreeSet is based on TreeMap, and TreeMap is implemented as a red-black tree, a self-balancing binary search tree. TreeSet implements NavigableSet, which extends SortedSet, which extends Set. Take note, TreeSet is not synchronized.
This example demonstrates TreeSet.
```java
import java.util.TreeSet;

public class SampleClass{
  public static void main(String[] args){
    TreeSet<String> ts = new TreeSet<>();

    System.out.println("Add elements...");
    ts.add("Lemon");
    ts.add("Cherry");
    ts.add("Apple");
    ts.add("Orange");

    System.out.println("Elements...");
    for(String str : ts)
      System.out.println(str);

    //descendingSet() returns a reverse-order view of this set
    ts = (TreeSet<String>)ts.descendingSet();
    System.out.println("Elements(reversed)...");
    for(String str : ts)
      System.out.println(str);

    System.out.println("Look for \"Apple\"...");
    if(ts.contains("Apple"))
      System.out.println("Apple found!");
    else
      System.out.println("Apple doesn't exist!");

    System.out.println("Remove \"Apple\"...");
    if(ts.remove("Apple"))
      System.out.println("Apple removed!");
    else
      System.out.println("Apple may not exist! Can't remove!");
  }
}
```

Result:

Add elements...
Elements...
Apple
Cherry
Lemon
Orange
Elements(reversed)...
Orange
Lemon
Cherry
Apple
Look for "Apple"...
Apple found!
Remove "Apple"...
Apple removed!

More information can be found in the TreeSet documentation.
EnumSet
EnumSet is a specialized Set implementation for use with enum types. All of the elements in an enum set must come from a single enum type that is specified, explicitly or implicitly, when the set is created. Enum sets are represented internally as bit vectors. This representation is extremely compact and efficient. The space and time performance of this class should be good enough to allow its use as a high-quality, typesafe alternative to traditional int-based "bit flags." Even bulk operations (such as containsAll and retainAll) should run very quickly if their argument is also an enum set.
The iterator returned by the iterator method traverses the elements in their natural order (the order in which the enum constants are declared). The returned iterator is weakly consistent: it will never throw ConcurrentModificationException and it may or may not show the effects of any modifications to the set that occur while the iteration is in progress.
Null elements are not permitted. Attempts to insert a null element will throw NullPointerException. Attempts to test for the presence of a null element or to remove one will, however, function properly. All basic operations execute in constant time. They are likely (though not guaranteed) to be much faster than their HashSet counterparts. Even bulk operations execute in constant time if their argument is also an enum set. Take note, EnumSet is not synchronized.
This example demonstrates EnumSet.
```java
import java.util.EnumSet;

public class SampleClass{
  public static void main(String[] args){
    //allOf() fills the EnumSet with every constant
    //of the given enum type
    EnumSet<Fruits> es = EnumSet.allOf(Fruits.class);
    for(Fruits f : es)
      System.out.println(f);
  }
}

enum Fruits{
  APPLE,
  AVOCADO,
  BANANA,
  ORANGE
}
```

Result:

APPLE
AVOCADO
BANANA
ORANGE

In the example above, we fill up the EnumSet using the allOf() method. Fruits.class is a Class<T> object that refers to the Fruits enum. To create an empty EnumSet, use the noneOf() method.

```java
import java.util.EnumSet;

public class SampleClass{
  public static void main(String[] args){
    //noneOf() creates an empty EnumSet of the given enum type
    EnumSet<Fruits> es = EnumSet.noneOf(Fruits.class);
    es.add(Fruits.APPLE);
    es.add(Fruits.ORANGE);
    for(Fruits f : es)
      System.out.println(f);
  }
}

enum Fruits{
  APPLE,
  AVOCADO,
  BANANA,
  ORANGE
}
```

Result:

APPLE
ORANGE
Map Interface
Map is a data structure that maps keys to values. A map cannot contain duplicate keys; each key can map to at most one value. This interface takes the place of the Dictionary class, which was a totally abstract class rather than an interface. Map is part of the Collections Framework, but it is not a subinterface of the Collection interface; Map and Collection are two separate hierarchies in the Collections Framework.
The Map interface provides three collection views, which allow a map's contents to be viewed as a set of keys, collection of values, or set of key-value mappings. The order of a map is defined as the order in which the iterators on the map's collection views return their elements. Some map implementations, like the TreeMap class, make specific guarantees as to their order; others, like the HashMap class, do not.
Note: great care must be exercised if mutable objects are used as map keys. The behavior of a map is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is a key in the map. A special case of this prohibition is that it is not permissible for a map to contain itself as a key. While it is permissible for a map to contain itself as a value, extreme caution is advised: the equals and hashCode methods are no longer well defined on such a map.
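To illustrate the warning above, here is a minimal sketch with a hypothetical mutable key class (MutableKey is our own name, not a JDK class). Once the key is mutated, the entry typically can no longer be found, because its hash code no longer matches the bucket it was stored in:

```java
import java.util.HashMap;

//Hypothetical key class whose hashCode depends on mutable state
class MutableKey{
    int id;
    MutableKey(int id){ this.id = id; }
    @Override public int hashCode(){ return id; }
    @Override public boolean equals(Object o){
        return o instanceof MutableKey && ((MutableKey)o).id == id;
    }
}

public class SampleClass{
    public static void main(String[] args){
        HashMap<MutableKey,String> map = new HashMap<>();
        MutableKey key = new MutableKey(1);
        map.put(key, "value");
        //Mutating the key after insertion changes its hash code,
        //so the entry now sits in the "wrong" bucket
        key.id = 2;
        System.out.println(map.get(key));               //typically null
        System.out.println(map.get(new MutableKey(1))); //typically null
    }
}
```

The lookup with the mutated key hashes to a different bucket, and the lookup with a fresh key finds the old bucket but fails the equals() check, so both typically return null. The entry is effectively lost.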
HashMap
HashMap is a hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.
This implementation provides constant-time performance for the basic operations (get and put), assuming the hash function disperses the elements properly among the buckets. Iteration over collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings).
Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important. HashMap is very similar to HashSet because HashSet is implemented on top of a HashMap. However, HashMap stores key-value pairs because it implements the Map interface instead of the Set interface. Take note, HashMap is not synchronized.
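For reference, the initial capacity and load factor discussed above can be supplied through HashMap's constructors; the values below are illustrative:

```java
import java.util.HashMap;

public class SampleClass{
    public static void main(String[] args){
        //Default: initial capacity 16, load factor 0.75
        HashMap<Integer,String> defaults = new HashMap<>();
        //Custom initial capacity only
        HashMap<Integer,String> bigger = new HashMap<>(64);
        //Custom initial capacity and load factor.
        //A resize (rehash) happens roughly when
        //size exceeds capacity * loadFactor
        HashMap<Integer,String> tuned = new HashMap<>(64, 0.9f);
        tuned.put(1, "one");
        System.out.println(tuned.get(1));
    }
}
```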
This example demonstrates HashMap.
import java.util.HashMap;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        HashMap<Integer,String> hm = new HashMap<>();
        System.out.println("Add key-value pairs...");
        hm.put(1,"Alligator");
        hm.put(2,"Dog");
        hm.put(3,"Spider");
        hm.put(4,"Cat");
        System.out.println("Loop through HashMap...");
        for(Map.Entry entry : hm.entrySet())
            System.out.println(entry.getKey() + " | " + entry.getValue());
        System.out.println("Get single element...");
        //get() returns the value mapped to a key.
        //Returns null if the specified key
        //can't be mapped
        String value = hm.get(3);
        System.out.println("value: " + value);
        System.out.println("Current size: " + hm.size());
        System.out.println("Remove an element...");
        hm.remove(3);
        System.out.println("New size: " + hm.size());
    }
}

Result
Add key-value pairs...
Loop through HashMap...
1 | Alligator
2 | Dog
3 | Spider
4 | Cat
Get single element...
value: Spider
Current size: 4
Remove an element...
New size: 3

In the example above, we use the entrySet() method and the Map.Entry static interface. Map implementations like HashMap wrap each of their key-value pairs in a Map.Entry instance. Map.Entry stores a single key-value pair of a Map. The entrySet() method returns all key-value pairs of a Map, wrapped in a Set collection. More information can be found in the HashMap documentation.
WeakHashMap
WeakHashMap is a hash table based implementation of the Map interface, with weak keys. An entry in a WeakHashMap will automatically be removed when its key is no longer in ordinary use. More precisely, the presence of a mapping for a given key will not prevent the key from being discarded by the garbage collector, that is, made finalizable, finalized, and then reclaimed. When a key has been discarded its entry is effectively removed from the map, so this class behaves somewhat differently from other Map implementations.
Both null values and the null key are supported. This class has performance characteristics similar to those of the HashMap class, and has the same efficiency parameters of initial capacity and load factor. Like most collection classes, this class is not synchronized. A synchronized WeakHashMap may be constructed using the Collections.synchronizedMap method.
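As noted above, a synchronized wrapper can be obtained from Collections.synchronizedMap(); a minimal sketch:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class SampleClass{
    public static void main(String[] args){
        //Wrap the WeakHashMap so each call is synchronized
        //on the wrapper object
        Map<String,String> weak =
            Collections.synchronizedMap(new WeakHashMap<String,String>());
        weak.put("key", "value");
        System.out.println(weak.get("key"));
    }
}
```

Take note that iterating over a synchronized wrapper still requires manually synchronizing on the wrapper for the duration of the iteration.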
To understand this collection, we need to understand Java reference types. There are four reference types; in this topic, we're going to focus on three of them: strong, soft and weak references. A strong reference is the kind of reference that we use in everyday programming.
String str = "String";
The statement above creates a strong reference to the String object. Creating soft and weak references is more explicit: we need to instantiate a SoftReference to create a soft reference, and a WeakReference to create a weak reference.
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class SampleClass{
    public static void main(String[] args){
        //Strong references
        String str1 = "String1";
        String str2 = "String2";
        //soft reference
        SoftReference<String> sr = new SoftReference<>(str1);
        //dereference strong reference from
        //"String1" object
        str1 = null;
        //weak reference
        WeakReference<String> wr = new WeakReference<>(str2);
        //dereference strong reference from
        //"String2" object
        str2 = null;
    }
}

So, what is the difference between these references? Let's start with strong references. If an object has a strong reference pointing to it, the object won't be collected by the GC (Garbage Collector) as long as that strong reference points to it. If an object only has soft references pointing to it, the object may be garbage collected due to memory demand.
In other words, objects that only have soft references may be garbage collected if Java needs more memory. Objects that only have weak references are short-lived: they will be garbage collected within a few GC cycles. WeakHashMap keys are WeakReference types. Even though the values of a WeakHashMap are not weak references, a value is discarded once its key is discarded.
You may want to consider WeakHashMap if you're planning to create storage for temporary data. This example demonstrates WeakHashMap. Take note, WeakHashMap is not synchronized.
import java.util.WeakHashMap;

public class SampleClass{
    public static void main(String[] args){
        WeakHashMap<ClassA,String> hm = new WeakHashMap<>();
        ClassA key = new ClassA("Key");
        System.out.println("Add key-value pair...");
        hm.put(key,"Alligator");
        key = null;
        System.gc();
        while(!hm.isEmpty()){
            System.out.println("Waiting for WeakHashMap"+
                               " to be empty.");
        }
        System.out.println("WeakHashMap is empty!");
    }
}

class ClassA{
    private String str;
    ClassA(String str){
        this.str = str;
    }
    public String getStr(){
        return str;
    }
}

In the example above, the key-value pair that we put in the WeakHashMap becomes eligible for garbage collection once we remove the strong reference pointing to the ClassA object. The example may loop infinitely if we change the key to a String literal or a small wrapper value, because such objects are interned and kept strongly reachable by the JVM, so they are never garbage collected.
More information can be found in WeakHashMap documentation.
LinkedHashMap
LinkedHashMap is a hash table and linked list implementation of the Map interface, with predictable iteration order. This implementation differs from HashMap in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration ordering, which is normally the order in which keys were inserted into the map (insertion-order).
Note that insertion order is not affected if a key is re-inserted into the map. (A key k is reinserted into a map m if m.put(k, v) is invoked when m.containsKey(k) would return true immediately prior to the invocation). A special constructor is provided to create a linked hash map whose order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access-order). This kind of map is well-suited to building LRU caches.
A linked hash map has two parameters that affect its performance: initial capacity and load factor. They are defined precisely as for HashMap. Note, however, that the penalty for choosing an excessively high value for initial capacity is less severe for this class than for HashMap, as iteration times for this class are unaffected by capacity.
A structural modification is any operation that adds or deletes one or more mappings or, in the case of access-ordered linked hash maps, affects iteration order. In insertion-ordered linked hash maps, merely changing the value associated with a key that is already contained in the map is not a structural modification. In access-ordered linked hash maps, merely querying the map with get is a structural modification. Take note, LinkedHashMap is not synchronized.
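The access-ordered constructor and eviction hook mentioned above can be combined to sketch a small LRU cache. This is only an illustration: the LruCache class name and the cache size are our own, not from the JDK.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SampleClass{
    //A minimal LRU cache: access-order plus an eviction rule
    static class LruCache<K,V> extends LinkedHashMap<K,V>{
        private final int maxEntries;
        LruCache(int maxEntries){
            super(16, 0.75f, true); //true = access-order
            this.maxEntries = maxEntries;
        }
        @Override
        protected boolean removeEldestEntry(Map.Entry<K,V> eldest){
            //evict the least-recently accessed entry
            //once the cache grows past maxEntries
            return size() > maxEntries;
        }
    }

    public static void main(String[] args){
        LruCache<Integer,String> cache = new LruCache<>(2);
        cache.put(1, "Alligator");
        cache.put(2, "Cat");
        cache.get(1);          //touch key 1 so key 2 becomes eldest
        cache.put(3, "Dog");   //evicts key 2
        System.out.println(cache.keySet()); //prints [1, 3]
    }
}
```

Overriding removeEldestEntry() lets the map evict entries automatically on insertion, which is exactly the behavior an LRU cache needs.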
This example demonstrates LinkedHashMap.
import java.util.LinkedHashMap;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        //This constructor sets the initial
        //capacity, load factor and the ordering
        //of this map: true for access-order,
        //false for insertion-order.
        //Access-order orders elements from
        //least-recently accessed to most-recently
        LinkedHashMap<Integer,String> hm =
                new LinkedHashMap<>(16, 0.75f, true);
        System.out.println("Add key-value pairs...");
        hm.put(1,"Alligator");
        hm.put(2,"Cat");
        hm.put(3,"Dog");
        hm.put(4,"Spider");
        System.out.println("Iterate through collection...");
        for(Map.Entry entry : hm.entrySet())
            System.out.println(entry.getKey() + " | " + entry.getValue());
        System.out.println("Get values...");
        String value = hm.get(2);
        System.out.println("Value: " + value);
        value = hm.get(3);
        System.out.println("Value: " + value);
        System.out.println("Iterate through collection...");
        for(Map.Entry entry : hm.entrySet())
            System.out.println(entry.getKey() + " | " + entry.getValue());
    }
}

Result
Add key-value pairs...
Iterate through collection...
1 | Alligator
2 | Cat
3 | Dog
4 | Spider
Get values...
Value: Cat
Value: Dog
Iterate through collection...
1 | Alligator
4 | Spider
2 | Cat
3 | Dog

More information can be found in the LinkedHashMap documentation.
TreeMap
TreeMap is a Red-Black Tree based NavigableMap implementation. The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used. This implementation provides guaranteed log(n) time cost for the containsKey, get, put and remove operations. Algorithms are adaptations of those in Cormen, Leiserson, and Rivest's Introduction to Algorithms.
TreeMap is very similar to TreeSet because TreeSet is implemented on top of a TreeMap. However, TreeMap stores key-value pairs because it implements the Map interface instead of the Set interface. The Map.Entry instances produced by this map don't support the Map.Entry.setValue() method. Take note, TreeMap is not synchronized.
This example demonstrates TreeMap.
import java.util.TreeMap;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        TreeMap<Integer,String> tm = new TreeMap<>();
        System.out.println("Add key-value pairs...");
        tm.put(1,"Alligator");
        tm.put(2,"Cat");
        tm.put(3,"Dog");
        tm.put(4,"Spider");
        System.out.println("Natural ordering...");
        for(Map.Entry entry : tm.entrySet())
            System.out.println(entry.getKey() + " | " + entry.getValue());
        System.out.println("Reverse natural ordering...");
        //descendingMap() returns a reverse-order
        //view of this TreeMap
        Map<Integer,String> map = tm.descendingMap();
        for(Map.Entry entry : map.entrySet())
            System.out.println(entry.getKey() + " | " + entry.getValue());
    }
}
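Since TreeMap implements NavigableMap, it also offers navigation methods beyond descendingMap(), such as firstKey(), floorKey() and headMap(). A short sketch (the sample keys here are our own):

```java
import java.util.TreeMap;

public class SampleClass{
    public static void main(String[] args){
        TreeMap<Integer,String> tm = new TreeMap<>();
        tm.put(1, "Alligator");
        tm.put(3, "Dog");
        tm.put(5, "Spider");
        System.out.println(tm.firstKey());    //1
        System.out.println(tm.lastKey());     //5
        System.out.println(tm.floorKey(4));   //3 (greatest key <= 4)
        System.out.println(tm.ceilingKey(4)); //5 (smallest key >= 4)
        //headMap(5) is a view of entries whose keys are less than 5
        System.out.println(tm.headMap(5));    //{1=Alligator, 3=Dog}
    }
}
```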
EnumMap
EnumMap is a specialized Map implementation for use with enum type keys. All of the keys in an enum map must come from a single enum type that is specified, explicitly or implicitly, when the map is created. Enum maps are represented internally as arrays. This representation is extremely compact and efficient.
Enum maps are maintained in the natural order(the order in which the enum constants are declared) of their keys. This is reflected in the iterators returned by the collections views (keySet(), entrySet(), and values()). Iterators returned by the collection views are weakly consistent: they will never throw ConcurrentModificationException and they may or may not show the effects of any modifications to the map that occur while the iteration is in progress.
Null keys are not permitted. Attempts to insert a null key will throw NullPointerException. Attempts to test for the presence of a null key or to remove one will, however, function properly. Null values are permitted. Implementation note: All basic operations(add(), get(), etc.) execute in constant time. They are likely (though not guaranteed) to be faster than their HashMap counterparts. Take note, EnumMap is not synchronized.
This example demonstrates EnumMap.
import java.util.EnumMap;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        EnumMap<Key, String> em = new EnumMap<>(Key.class);
        System.out.println("Add key-value pairs...");
        em.put(Key.WHITEKEY,"Wooden Box");
        em.put(Key.BLUEKEY,"Metal Box");
        em.put(Key.BLACKKEY,"Wooden Box");
        em.put(Key.GREENKEY,"Metal Box");
        System.out.println("Iterate through EnumMap...");
        for(Map.Entry entry : em.entrySet())
            System.out.println(entry.getKey() + " | " + entry.getValue());
    }
}

enum Key{
    BLUEKEY, WHITEKEY, GREENKEY, BLACKKEY
}

Result
Add key-value pairs...
Iterate through EnumMap...
BLUEKEY | Metal Box
WHITEKEY | Wooden Box
GREENKEY | Metal Box
BLACKKEY | Wooden Box

In the example above, you might wonder what Key.class is. Key is the enum name, and the .class syntax yields the Class object that represents that type; instances of the Class class represent the classes and interfaces of a running Java application. More information can be found in the EnumMap documentation.
Hashtable
Hashtable class implements a hash table, which maps keys to values. Any non-null object can be used as a key or as a value. To successfully store and retrieve objects from a hashtable, the objects used as keys must implement the hashCode method and the equals method.
An instance of Hashtable has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. Note that the hash table is open: in the case of a "hash collision", a single bucket stores multiple entries, which must be searched sequentially.
The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. The initial capacity and load factor parameters are merely hints to the implementation. The exact details as to when and whether the rehash method is invoked are implementation-dependent.
Generally, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the time cost to look up an entry (which is reflected in most Hashtable operations, including get and put).
The initial capacity controls a tradeoff between wasted space and the need for rehash operations, which are time-consuming. No rehash operations will ever occur if the initial capacity is greater than the maximum number of entries the Hashtable will contain divided by its load factor. However, setting the initial capacity too high can waste space.
As of the Java 2 platform v1.2, this class was retrofitted to implement the Map interface, making it a member of the Java Collections Framework. Unlike the new collection implementations, Hashtable is synchronized. If a thread-safe implementation is not needed, it is recommended to use HashMap in place of Hashtable. If a thread-safe highly-concurrent implementation is desired, then it is recommended to use ConcurrentHashMap in place of Hashtable.
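As the paragraph above suggests, ConcurrentHashMap can replace Hashtable when a highly-concurrent map is needed; a minimal sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SampleClass{
    public static void main(String[] args){
        ConcurrentHashMap<Integer,String> chm = new ConcurrentHashMap<>();
        chm.put(1, "Alligator");
        chm.put(2, "Dog");
        //putIfAbsent only inserts when the key
        //is not already mapped
        chm.putIfAbsent(1, "Cat");
        System.out.println(chm.get(1)); //Alligator
        System.out.println(chm.size()); //2
        //Like Hashtable, ConcurrentHashMap does not
        //permit null keys or null values
    }
}
```

Unlike Hashtable, ConcurrentHashMap allows reads and a configurable number of writes to proceed concurrently instead of locking the whole table.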
This example demonstrates Hashtable.
import java.util.Hashtable;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        Hashtable<Integer,String> ht = new Hashtable<>();
        System.out.println("Add key-value pairs...");
        ht.put(1,"Alligator");
        ht.put(2,"Dog");
        ht.put(3,"Spider");
        ht.put(4,"Cat");
        System.out.println("Loop through Hashtable...");
        for(Map.Entry entry : ht.entrySet())
            System.out.println(entry.getKey() + " | " + entry.getValue());
        System.out.println("Get single element...");
        //get() returns the value mapped to a key.
        //Returns null if the specified key
        //can't be mapped
        String value = ht.get(3);
        System.out.println("value: " + value);
        System.out.println("Current size: " + ht.size());
        System.out.println("Remove an element...");
        ht.remove(3);
        System.out.println("New size: " + ht.size());
    }
}

Result
Add key-value pairs...
Loop through Hashtable...
4 | Cat
3 | Spider
2 | Dog
1 | Alligator
Get single element...
value: Spider
Current size: 4
Remove an element...
New size: 3

More information can be found in the Hashtable documentation.
Converting Collections to Other Collections
We can convert a collection to another collection. For example, we can convert an ArrayList to a LinkedList, an ArrayDeque to an ArrayList, and so on. However, some conversions may not be necessary. For example, converting a HashMap to a HashSet may not be necessary because HashSet is the Set counterpart of HashMap. Also, the Collection and Map interfaces represent different kinds of data structures, so conversion between the two may need several workarounds.
Conversion Between Collections of Collection Interface
Let's convert between collections that implement subinterfaces of the Collection interface. Most of these collections have constructors that accept a Collection type as an argument. Take a look at this example:
import java.util.ArrayList;
import java.util.TreeSet;
import java.util.LinkedList;

public class SampleClass{
    public static void main(String[] args){
        ArrayList<String> ar = new ArrayList<>();
        ar.add("Apple");
        ar.add("Orange");
        ar.add("Banana");
        TreeSet<String> ts = new TreeSet<>(ar);
        LinkedList<String> ll = new LinkedList<>(ar);
        System.out.println("ArrayList Elements...");
        for(String str : ar)
            System.out.println(str);
        System.out.println("TreeSet Elements...");
        for(String str : ts)
            System.out.println(str);
        System.out.println("LinkedList Elements...");
        for(String str : ll)
            System.out.println(str);
    }
}

Result
ArrayList Elements...
Apple
Orange
Banana
TreeSet Elements...
Apple
Banana
Orange
LinkedList Elements...
Apple
Orange
Banana

A few collections can't be created this way. EnumSet, for example, doesn't have public constructors, but it has the copyOf(Collection<E> c) method that creates an enum set initialized from the specified collection. (PriorityQueue does have a constructor that accepts a Collection; it also has one that accepts a SortedSet.)
If we want to add the elements of an existing collection to another existing collection, we use the addAll() method. Most collections have this method. It appends the elements of one collection to another.
import java.util.ArrayList;
import java.util.TreeSet;
import java.util.LinkedList;

public class SampleClass{
    public static void main(String[] args){
        ArrayList<String> ar = new ArrayList<>();
        ar.add("Apple");
        ar.add("Orange");
        ar.add("Banana");
        TreeSet<String> ts = new TreeSet<>();
        ts.add("Avocado");
        ts.addAll(ar);
        LinkedList<String> ll = new LinkedList<>();
        ll.add("Guava");
        ll.addAll(ts);
        System.out.println("ArrayList Elements...");
        for(String str : ar)
            System.out.println(str);
        System.out.println("TreeSet Elements...");
        for(String str : ts)
            System.out.println(str);
        System.out.println("LinkedList Elements...");
        for(String str : ll)
            System.out.println(str);
    }
}

Result
ArrayList Elements...
Apple
Orange
Banana
TreeSet Elements...
Apple
Avocado
Banana
Orange
LinkedList Elements...
Guava
Apple
Avocado
Banana
Orange

Most collections have this form of addAll():
public boolean addAll(Collection<? extends E> c)
Some collections also have a second form of addAll():
public boolean addAll(int index, Collection<? extends E> c)
This second form inserts all of the elements in the specified collection into the collection that calls this method, starting at the specified index.
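The indexed form is available on List implementations; a short sketch (the sample elements are our own):

```java
import java.util.ArrayList;
import java.util.List;

public class SampleClass{
    public static void main(String[] args){
        ArrayList<String> ar = new ArrayList<>(List.of("Apple", "Banana"));
        List<String> extra = List.of("Orange", "Guava");
        //Insert all elements of extra starting at index 1;
        //"Banana" shifts to the right
        ar.addAll(1, extra);
        System.out.println(ar); //[Apple, Orange, Guava, Banana]
    }
}
```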
Conversion Between Collections of Map Interface
Map implementations have constructors that accept a Map type as an argument. Take a look at this example:
import java.util.HashMap;
import java.util.TreeMap;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        HashMap<Integer, String> hm = new HashMap<>();
        hm.put(1, "Aa");
        hm.put(2, "Bb");
        hm.put(3, "Cc");
        TreeMap<Integer, String> tm = new TreeMap<>(hm);
        System.out.println("HashMap key-value pairs");
        for(Map.Entry me : hm.entrySet())
            System.out.println(me.getKey() + " | " + me.getValue());
        System.out.println("TreeMap key-value pairs");
        for(Map.Entry me : tm.entrySet())
            System.out.println(me.getKey() + " | " + me.getValue());
    }
}

Result
HashMap key-value pairs
1 | Aa
2 | Bb
3 | Cc
TreeMap key-value pairs
1 | Aa
2 | Bb
3 | Cc

If we want to copy the entries of an existing map into another existing map, we use the putAll() method. Most, if not all, Map implementations have this method. It copies the entries of one map into another. Method form:
putAll(Map<? extends K,? extends V> map)
import java.util.HashMap;
import java.util.TreeMap;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        HashMap<Integer, String> hm = new HashMap<>();
        hm.put(1, "Aa");
        hm.put(2, "Bb");
        hm.put(3, "Cc");
        TreeMap<Integer, String> tm = new TreeMap<>();
        tm.put(4, "Dd");
        tm.put(5, "Ee");
        tm.putAll(hm);
        System.out.println("HashMap key-value pairs");
        for(Map.Entry me : hm.entrySet())
            System.out.println(me.getKey() + " | " + me.getValue());
        System.out.println("TreeMap key-value pairs");
        for(Map.Entry me : tm.entrySet())
            System.out.println(me.getKey() + " | " + me.getValue());
    }
}

Result
HashMap key-value pairs
1 | Aa
2 | Bb
3 | Cc
TreeMap key-value pairs
1 | Aa
2 | Bb
3 | Cc
4 | Dd
5 | Ee
Conversion Between Collection and Map
Conversion between Collection and Map is a bit tricky. A map requires key-value pairs, whereas a collection of the Collection interface holds only single values. Let's start with converting a map to a collection. There are multiple ways to do this; first, let's separately convert the keys and values of a map to collections.
import java.util.HashMap;
import java.util.ArrayList;

public class SampleClass{
    public static void main(String[] args){
        HashMap<Integer, String> hm = new HashMap<>();
        hm.put(1,"Orange");
        hm.put(2,"Apple");
        hm.put(3,"Guava");
        ArrayList<Integer> keys = new ArrayList<>(hm.keySet());
        ArrayList<String> values = new ArrayList<>(hm.values());
        for(int i = 0; i < keys.size(); i++)
            System.out.println(keys.get(i) + " | " + values.get(i));
    }
}

Result
1 | Orange
2 | Apple
3 | Guava

Map implementations have keySet() and values() methods. keySet() returns a Set that contains all of the keys of a map; values() returns a Collection that contains all of its values. The two ArrayLists above line up index by index because, in practice, keySet() and values() traverse the same map in the same order.
Another way of converting a Map to a Collection is to use the entrySet() method. This method is available to implementations of the Map interface. It returns a view of the map as a Set whose element type is Map.Entry.
import java.util.HashMap;
import java.util.ArrayList;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        HashMap<Integer, String> hm = new HashMap<>();
        hm.put(1,"Orange");
        hm.put(2,"Apple");
        hm.put(3,"Guava");
        ArrayList<Map.Entry> ar = new ArrayList<>(hm.entrySet());
        for(Map.Entry me : ar)
            System.out.println(me.getKey() + " | " + me.getValue());
    }
}

Result
1 | Orange
2 | Apple
3 | Guava

In the example above, we didn't need to separate the keys and values when we converted the map into a Collection. Map.Entry doesn't have a natural ordering, so we need to supply a Comparator (or make the elements Comparable) in order to sort the converted Map.Entry elements in a collection.
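One convenient way to do that sorting is the built-in Map.Entry.comparingByKey() comparator (there is a comparingByValue() counterpart as well); a sketch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        HashMap<Integer, String> hm = new HashMap<>();
        hm.put(3, "Guava");
        hm.put(1, "Orange");
        hm.put(2, "Apple");
        ArrayList<Map.Entry<Integer, String>> ar =
                new ArrayList<>(hm.entrySet());
        //Sort the entries by key using the built-in comparator
        ar.sort(Map.Entry.comparingByKey());
        for(Map.Entry<Integer, String> me : ar)
            System.out.println(me.getKey() + " | " + me.getValue());
    }
}
```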
Next, let's convert a collection of the Collection interface to a Map. Such a collection holds only single values, so to convert it to a Map, we need to decide whether the elements of the collection are going to be keys or values, or strategize on how to derive keys and values from a source.
import java.util.TreeMap;
import java.util.stream.Collectors;
import java.util.ArrayList;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        ArrayList<String> ar = new ArrayList<>();
        ar.add("Orange");
        ar.add("Avocado");
        ar.add("Guava");
        TreeMap<Integer, String> tm =
                new TreeMap<>(ar.stream().collect(
                        Collectors.toMap(
                                String::length,
                                String::toString)));
        for(Map.Entry me : tm.entrySet())
            System.out.println(me.getKey() + " | " + me.getValue());
    }
}

Result
5 | Guava
6 | Orange
7 | Avocado

In the example above, the lengths of the String elements are the keys and the Strings themselves are the values. We create a stream pipeline and use Collectors.toMap() to build the key-value pairs, then pass the resulting Map to the TreeMap constructor.
Remember that a Map doesn't accept duplicate keys. When converting between Collection and Map, we may produce a duplicate key during conversion. If that happens, Java will throw IllegalStateException. To avoid passing duplicate keys during conversion, we can use the second form of Collectors.toMap().
import java.util.TreeMap;
import java.util.stream.Collectors;
import java.util.ArrayList;
import java.util.Map;
import java.util.function.Function;

public class SampleClass{
    public static void main(String[] args){
        ArrayList<String> ar = new ArrayList<>();
        ar.add("Orange");
        ar.add("Avocado");
        ar.add("Guava");
        ar.add("Avocado");
        TreeMap<String, Integer> tm =
                new TreeMap<>(ar.stream().collect(
                        Collectors.toMap(
                                Function.identity(),
                                String::length,
                                (value1, value2) -> value1)));
        for(Map.Entry me : tm.entrySet())
            System.out.println(me.getKey() + " | " + me.getValue());
    }
}

Result
Avocado | 7
Guava | 5
Orange | 6

In the example above, toMap() has three arguments. The first argument produces the key and is supplied by Function.identity(), which returns its input unchanged; in this case, the input is each String element in the stream pipeline. The second argument produces the value, which is the length of each String element. The third argument is a BinaryOperator merge function: when two elements map to the same key, it receives the two competing values and returns the one to keep. Here, value2 is discarded and value1 is kept.
Unmodifiable Collection
Some collections can be made unmodifiable. If a collection is unmodifiable, operations like add, remove and replace will throw UnsupportedOperationException. List, Set and Map have methods that can create unmodifiable instances. For List and Set, the of() and copyOf() methods provide a convenient way to create an unmodifiable List/Set. For Map, the of(), ofEntries() and copyOf() methods provide a convenient way to create an unmodifiable Map.
Unmodifiable collection has characteristics that make it distinct from modifiable collection. Read the List, Set and Map documentation to know the characteristics of their unmodifiable collections.
This example demonstrates unmodifiable List.
import java.util.List;
import java.util.ArrayList;

public class SampleClass{
    public static void main(String[] args){
        ArrayList<String> ar = new ArrayList<>();
        ar.add("Orange");
        ar.add("Avocado");
        ar.add("Guava");
        List<String> list = List.copyOf(ar);
        //ar can be modified
        ar.add("Apple");
        //list is unmodifiable.
        //Thus, we can't add more
        //elements to it. This
        //will throw
        //UnsupportedOperationException
        //if list calls add()
        //list.add("Apple");
        System.out.println("ArrayList Elements...");
        for(String str : ar)
            System.out.println(str);
        System.out.println("Unmodifiable List"+
                           " Elements...");
        for(String str : list)
            System.out.println(str);
    }
}

Result
ArrayList Elements...
Orange
Avocado
Guava
Apple
Unmodifiable List Elements...
Orange
Avocado
Guava
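Besides copyOf(), the of() and ofEntries() methods mentioned earlier create unmodifiable collections directly from their arguments; a short sketch (the sample elements are our own):

```java
import java.util.List;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        //Unmodifiable List built directly from elements
        List<String> list = List.of("Orange", "Avocado", "Guava");
        //Unmodifiable Map built from Map.entry key-value pairs
        Map<Integer,String> map = Map.ofEntries(
                Map.entry(1, "Alligator"),
                Map.entry(2, "Dog"));
        System.out.println(list.size()); //3
        System.out.println(map.get(2));  //Dog
        //list.add("Apple"); //would throw UnsupportedOperationException
    }
}
```

Take note that of() and ofEntries() reject null elements, keys and values, and the iteration order of a Map created this way is unspecified.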
Unmodifiable View
Unmodifiable collections that are created by methods like Collections.unmodifiableList(), Collections.unmodifiableMap(), etc. are called unmodifiable view collections. Unlike unmodifiable collections created by the of(), ofEntries() and copyOf() methods, a view reflects any modifications that happen to its backing collection.
This example demonstrates unmodifiable view list.
import java.util.List;
import java.util.ArrayList;
import java.util.Collections;

public class SampleClass{
    public static void main(String[] args){
        ArrayList<String> ar = new ArrayList<>();
        ar.add("Orange");
        ar.add("Avocado");
        ar.add("Guava");
        List<String> list = Collections.unmodifiableList(ar);
        //ar can be modified
        ar.add("Apple");
        //list is unmodifiable.
        //Thus, we can't add more
        //elements to it. This
        //will throw
        //UnsupportedOperationException
        //if list calls add()
        //list.add("Apple");
        System.out.println("ArrayList Elements...");
        for(String str : ar)
            System.out.println(str);
        System.out.println("Unmodifiable List"+
                           " Elements...");
        for(String str : list)
            System.out.println(str);
    }
}

Result
ArrayList Elements...
Orange
Avocado
Guava
Apple
Unmodifiable List Elements...
Orange
Avocado
Guava
Apple

More methods that create unmodifiable view collections can be found in the Collections class.
Multi-dimensional Collection
Just as arrays can be multi-dimensional, collections can be multi-dimensional too. To create a multi-dimensional collection, we put collections as elements of another collection. This example demonstrates an ArrayList in an ArrayList.
import java.util.ArrayList;
import java.util.Arrays;

public class SampleClass{
    public static void main(String[] args){
        ArrayList<ArrayList<Integer>> arr = new ArrayList<>();
        arr.add(new ArrayList<>
                (Arrays.asList(new Integer[]{1,2,3,4,5})));
        for(ArrayList<Integer> o : arr)
            for(Integer i : o)
                System.out.print(i + " ");
    }
}

Result
1 2 3 4 5

Next, this example demonstrates a HashMap in a LinkedHashMap.
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class SampleClass{
    public static void main(String[] args){
        LinkedHashMap<HashMap<String[], String[]>, String> host =
                new LinkedHashMap<>();
        HashMap<String[], String[]> innerHash = new HashMap<>();
        innerHash.put(
                new String[]{"Alt Hostname #1"},
                new String[]{"C:\\Links"});
        host.put(innerHash, "https://www.myHost.com");
        for(Map.Entry me : host.entrySet()){
            HashMap inner = null;
            Object key = me.getKey();
            if(key instanceof HashMap)
                inner = (HashMap)key;
            else
                throw new NullPointerException("inner is null");
            Object[] altName = inner.keySet().toArray();
            Object[] dirPath = inner.values().toArray();
            //If a collection has an array as element,
            //the arrays that we add to the collection are
            //stored in another array. Thus, creating
            //a two-dimensional array.
            System.out.println("Alternative Host Name/s");
            for(int i = 0; i < altName.length; i++){
                String[] names = (String[])altName[i];
                for(String s : names)
                    System.out.println(s);
            }
            System.out.println();
            System.out.println("Text File Paths");
            for(int i = 0; i < dirPath.length; i++){
                String[] path = (String[])dirPath[i];
                for(String s : path)
                    System.out.println(s);
            }
            System.out.println();
            System.out.println("URL");
            System.out.println(me.getValue());
        }
    }
}

Result
Alternative Host Name/s
Alt Hostname #1

Text File Paths
C:\Links

URL
https://www.myHost.com