A quick correction up front: MySQL does not have a JSONB column type — JSONB is PostgreSQL's binary JSON format. MySQL's equivalent is its native JSON type, which likewise validates documents on insert and stores them in an optimized binary representation. In Go, the standard database/sql package together with the go-sql-driver/mysql driver lets you read and write JSON columns by marshaling your data to []byte on the way in and unmarshaling it on the way out.

Here's a simple example of working with a JSON column in Go and MySQL:
```go
package main

import (
	"database/sql"
	"encoding/json"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

type User struct {
	ID   int                    `json:"id"`
	Data map[string]interface{} `json:"data"`
}

func main() {
	// Set up the database connection
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/your_database")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Create a sample JSON object
	userData := map[string]interface{}{
		"name":    "John Doe",
		"age":     30,
		"address": "123 Main St, Anytown, USA",
	}

	// Marshal the JSON object to bytes for the JSON column
	jsonData, err := json.Marshal(userData)
	if err != nil {
		panic(err)
	}

	// Insert data into the database
	_, err = db.Exec("INSERT INTO users (data) VALUES (?)", jsonData)
	if err != nil {
		panic(err)
	}

	// Query data from the database
	rows, err := db.Query("SELECT id, data FROM users")
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	for rows.Next() {
		var user User
		var raw []byte
		if err := rows.Scan(&user.ID, &raw); err != nil {
			panic(err)
		}
		if err := json.Unmarshal(raw, &user.Data); err != nil {
			panic(err)
		}
		fmt.Printf("User ID: %d, Data: %+v\n", user.ID, user.Data)
	}
	if err := rows.Err(); err != nil {
		panic(err)
	}
}
```
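Scanning into a raw []byte and unmarshaling by hand works, but it is common to wrap the conversion in a type that implements driver.Valuer and sql.Scanner, so database/sql handles it automatically on every Exec and Scan. A minimal sketch — the JSONMap name is my own, not from any library — and the round-trip in main exercises it without needing a live database:

```go
package main

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
	"fmt"
)

// JSONMap marshals itself to JSON when written to the database
// and unmarshals itself when scanned back out of a JSON column.
type JSONMap map[string]interface{}

// Value implements driver.Valuer; database/sql calls it on INSERT/UPDATE.
func (m JSONMap) Value() (driver.Value, error) {
	return json.Marshal(m)
}

// Scan implements sql.Scanner; rows.Scan calls it on SELECT.
func (m *JSONMap) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return errors.New("JSONMap: expected []byte from driver")
	}
	return json.Unmarshal(b, m)
}

func main() {
	// Round-trip without a database: Value produces the bytes the
	// driver would send, Scan parses the bytes it would receive.
	orig := JSONMap{"name": "John Doe", "age": 30}
	v, err := orig.Value()
	if err != nil {
		panic(err)
	}
	var decoded JSONMap
	if err := decoded.Scan(v); err != nil {
		panic(err)
	}
	fmt.Println(decoded["name"]) // prints "John Doe"
}
```

With this type in place, the earlier example simplifies to `db.Exec("INSERT INTO users (data) VALUES (?)", userData)` and `rows.Scan(&user.ID, &user.Data)` with Data declared as JSONMap. One caveat: JSON numbers decode to float64, so an inserted int comes back as a float64 unless you use json.Decoder with UseNumber.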