Reading and writing compressed files in Python is straightforward with the standard-library modules gzip, bz2, and lzma (the last handles the xz format). Each module provides an open() function that mirrors the built-in open(), supporting both text ('rt'/'wt') and binary ('rb'/'wb') modes. Below are examples for the gzip, bz2, and xz formats.
# Example: Reading and writing gzip compressed files
import gzip

# Writing to a gzip file
with gzip.open('example.txt.gz', 'wt') as f:
    f.write('This is an example of writing to a gzip file.\n')

# Reading from a gzip file
with gzip.open('example.txt.gz', 'rt') as f:
    content = f.read()
print(content)
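For larger files, gzip.open can also be used in binary mode together with shutil.copyfileobj to stream-compress an existing file without loading it all into memory; gzip.open accepts a compresslevel argument from 0 to 9. A minimal sketch (the file names here are illustrative):

```python
import gzip
import shutil

# Create a sample file to compress (name is illustrative)
with open('example.txt', 'w') as f:
    f.write('sample data\n' * 100)

# Stream-compress in binary mode; compresslevel=9 gives the
# smallest output at the cost of more CPU time
with open('example.txt', 'rb') as src, \
        gzip.open('example.txt.gz', 'wb', compresslevel=9) as dst:
    shutil.copyfileobj(src, dst)
```

The same streaming pattern works with bz2.open and lzma.open, since all three share the same interface.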
# Example: Reading and writing bz2 compressed files
import bz2

# Writing to a bz2 file
with bz2.open('example.txt.bz2', 'wt') as f:
    f.write('This is an example of writing to a bz2 file.\n')

# Reading from a bz2 file
with bz2.open('example.txt.bz2', 'rt') as f:
    content = f.read()
print(content)
# Example: Reading and writing xz compressed files (lzma module)
import lzma

# Writing to an xz file
with lzma.open('example.txt.xz', 'wt') as f:
    f.write('This is an example of writing to an xz file.\n')

# Reading from an xz file
with lzma.open('example.txt.xz', 'rt') as f:
    content = f.read()
print(content)
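Because gzip.open, bz2.open, and lzma.open all share the same signature, a small helper can pick the right opener from the file extension. This is a sketch, not a standard-library API; the helper name and file names are illustrative:

```python
import bz2
import gzip
import lzma

# Map extensions to opener functions; all three share the same
# open(filename, mode) calling convention, so they are interchangeable
_OPENERS = {'.gz': gzip.open, '.bz2': bz2.open, '.xz': lzma.open}

def open_compressed(path, mode='rt'):
    """Open a file, transparently decompressing by extension."""
    for ext, opener in _OPENERS.items():
        if path.endswith(ext):
            return opener(path, mode)
    return open(path, mode)  # fall back to a plain file

# Usage: the same calls work regardless of compression format
with open_compressed('data.txt.xz', 'wt') as f:
    f.write('hello\n')
with open_compressed('data.txt.xz', 'rt') as f:
    print(f.read())
```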