Memory-mapped files let an application map a file (or device) into its virtual address space; on POSIX systems this is done with the mmap system call, which C++ programs can use directly. Once mapped, the file's contents are accessed through ordinary pointers as if they were in-memory data, which avoids repeated read/write system calls and buffer copies and can improve performance when working with large files.
#include <iostream>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
int main() {
    const char *filePath = "example.txt";
    // O_RDWR | O_CREAT: open for reading and writing, creating the file if needed
    int fd = open(filePath, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR);
    if (fd == -1) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
    // Write some data first so the file is non-empty; mapping a zero-length file fails
    const char *data = "Hello, Memory-Mapped Files!";
    size_t fileSize = strlen(data);
    if (write(fd, data, fileSize) != (ssize_t)fileSize) {
        std::cerr << "Write failed!" << std::endl;
        close(fd);
        return 1;
    }
    // Map the file into memory, readable and writable, with changes shared back to the file
    char *mapped = static_cast<char *>(
        mmap(nullptr, fileSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (mapped == MAP_FAILED) {
        std::cerr << "Mapping failed!" << std::endl;
        close(fd);
        return 1;
    }
    // Read from the mapping; the region is not null-terminated, so print exactly fileSize bytes
    std::cout << "Mapped Data: ";
    std::cout.write(mapped, fileSize) << std::endl;
    // Clean up: unmap before closing the descriptor
    munmap(mapped, fileSize);
    close(fd);
    return 0;
}