Python provides various ways to interact with databases, including libraries and frameworks that facilitate connecting, querying, and handling data efficiently. One of the most convenient options is the sqlite3 module in the standard library, which works with SQLite databases; with third-party drivers, Python can also connect to databases like MySQL, PostgreSQL, and Oracle.
import sqlite3
# Connect to a database (or create it if it doesn't exist)
connection = sqlite3.connect('example.db')
# Create a cursor object to execute SQL commands
cursor = connection.cursor()
# Create a new table
cursor.execute('''CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)''')
# Insert a new user into the table
cursor.execute('''INSERT INTO users (name, age) VALUES (?, ?)''', ('Alice', 30))
# Commit the transaction
connection.commit()
# Query the table
cursor.execute('''SELECT * FROM users''')
# Fetch and print all results
results = cursor.fetchall()
for row in results:
    print(row)
# Close the connection
connection.close()
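The same workflow can be written more concisely by using the connection as a context manager, which commits the transaction on success and rolls it back if an exception occurs. The sketch below also uses executemany() to insert several rows with one parameterized statement; the file and table names mirror the example above and are purely illustrative.

```python
import sqlite3

# Using the connection as a context manager: the transaction is
# committed automatically on success, or rolled back on error.
with sqlite3.connect('example.db') as connection:
    cursor = connection.cursor()
    cursor.execute('''CREATE TABLE IF NOT EXISTS users
                      (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)''')
    # executemany() runs the same parameterized INSERT for each tuple,
    # which avoids both repeated execute() calls and SQL injection.
    new_users = [('Bob', 25), ('Carol', 35)]
    cursor.executemany('INSERT INTO users (name, age) VALUES (?, ?)',
                       new_users)

# Note: the context manager handles the transaction but does NOT close
# the connection, so close it explicitly when you are done.
connection.close()
```

Using placeholders (`?`) rather than string formatting is important even for trusted data, since it lets the driver handle quoting and escaping correctly.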