Writing intrusive containers for embedded systems means building data structures whose linkage (next/prev pointers, tree hooks) is embedded in the stored objects themselves, rather than held in separate node wrappers allocated by the container. This is particularly useful in memory-constrained embedded environments: the container performs no dynamic allocation, elements can live in static or stack storage, and the per-element overhead is fixed and predictable.
An intrusive container is implemented by embedding a link (or pointer) to other elements directly in the objects you plan to store. Below is a simple intrusive singly linked list in C++; note that the list never allocates or frees memory, because the elements own their own links:
#include &lt;iostream&gt;

// An element of an intrusive list carries its own link pointer;
// the container never allocates node wrappers around it
struct Sensor {
    int value;
    Sensor* next = nullptr;
    explicit Sensor(int v) : value(v) {}
};

// Intrusive singly linked list: it stores only a head pointer and
// threads through the links embedded in caller-owned objects
template <typename T>
class IntrusiveList {
private:
    T* head = nullptr;
public:
    // Link a caller-owned object at the front of the list (no allocation)
    void push_front(T& node) {
        node.next = head;
        head = &node;
    }
    // Display the list
    void display() const {
        for (const T* current = head; current; current = current->next) {
            std::cout << current->value << " ";
        }
        std::cout << std::endl;
    }
    // No destructor is needed: the list does not own its elements,
    // so there is nothing to free
};

// Example usage
int main() {
    // Elements live in automatic (or static) storage; no heap is used,
    // which suits memory-constrained embedded targets
    Sensor a(10), b(20), c(30);
    IntrusiveList<Sensor> list;
    list.push_front(a);
    list.push_front(b);
    list.push_front(c);
    list.display(); // Output: 30 20 10
    return 0;
}