Python provides a built-in module called `csv` that makes it easy to read from and write to CSV files. The `csv` module can handle both reading and writing in various dialects and formats.
To read a CSV file, you can use the `csv.reader()` function, which returns an object that iterates over the rows of the specified CSV file.
import csv

with open('example.csv', mode='r') as file:
    csvFile = csv.reader(file)
    for lines in csvFile:
        print(lines)
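As noted above, the module handles various dialects and formats. `csv.reader()` accepts formatting parameters such as `delimiter` to parse non-default dialects. A minimal sketch, assuming a semicolon-separated file (the filename `data.csv` is illustrative):

```python
import csv

# Create a small semicolon-delimited file so the example is self-contained
with open('data.csv', mode='w', newline='') as file:
    file.write('Name;Age\nAlice;30\n')

# Pass delimiter to csv.reader to parse the semicolon-separated dialect
with open('data.csv', mode='r', newline='') as file:
    rows = list(csv.reader(file, delimiter=';'))

print(rows)  # [['Name', 'Age'], ['Alice', '30']]
```

Other formatting parameters such as `quotechar` and `skipinitialspace` can be passed the same way.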
You can write to a CSV file using the `csv.writer()` function. This function allows you to write rows of data to your CSV file.
import csv

with open('example.csv', mode='w', newline='') as file:
    csvWriter = csv.writer(file)
    csvWriter.writerow(['Name', 'Age', 'City'])
    csvWriter.writerow(['Alice', '30', 'New York'])
    csvWriter.writerow(['Bob', '25', 'Los Angeles'])
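The `csv` module also provides `csv.DictWriter` and `csv.DictReader`, which work with rows as dictionaries keyed by the header row instead of plain lists. A minimal sketch (the filename `people.csv` is illustrative):

```python
import csv

# Write rows as dictionaries; fieldnames fixes the column order
with open('people.csv', mode='w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=['Name', 'Age'])
    writer.writeheader()
    writer.writerow({'Name': 'Alice', 'Age': '30'})

# Read the rows back as dictionaries keyed by the header row
with open('people.csv', mode='r', newline='') as file:
    records = list(csv.DictReader(file))

print(records)  # [{'Name': 'Alice', 'Age': '30'}]
```

This is often more readable than indexing list rows by position, especially for files with many columns.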