In C++, `std::array` is a fixed-size container that stores its elements inline, with no heap allocation, unlike dynamic containers such as `std::vector`. Because its size is part of the type and fixed at compile time, `std::array` provides no member functions for inserting or erasing elements.
To simulate insertion or erasure with a `std::array`, you have to create a new array of the appropriate size and copy the elements over. Note that there is no way to do this "efficiently" in the `std::vector` sense; every element is copied. Below is an example demonstrating the approach.
#include <array>
#include <cstddef>
#include <iostream>

int main() {
    std::array<int, 5> arr = {1, 2, 3, 4, 5};

    // "Insert" 10 at index 2: copy into a new array one element larger.
    std::array<int, 6> newArr{};
    for (std::size_t i = 0; i < 2; ++i) {
        newArr[i] = arr[i];          // elements before the insertion point
    }
    newArr[2] = 10;                  // the inserted element
    for (std::size_t i = 2; i < arr.size(); ++i) {
        newArr[i + 1] = arr[i];      // shift the remaining elements right
    }

    // "Erase" the element at index 1: copy into a new array one element smaller.
    std::array<int, 5> arrAfterErase{};
    for (std::size_t i = 0, j = 0; i < newArr.size(); ++i) {
        if (i != 1) {                // skip index 1
            arrAfterErase[j++] = newArr[i];
        }
    }

    // Display the result: 1 10 3 4 5
    for (int num : arrAfterErase) {
        std::cout << num << " ";
    }
    std::cout << "\n";
    return 0;
}