Tablestore: Get started with the Wide Column model

Last Updated: Feb 27, 2025

The Wide Column model is similar to the data models of Bigtable and HBase and is suitable for scenarios such as metadata storage and big data storage. A single data table can store petabytes of data and sustain tens of millions of queries per second (QPS). This topic describes how to use the Tablestore CLI to get started with the Wide Column model.

Prerequisites

An instance is created. For more information, see Create an instance.

Procedure

Step 1: Configure access information for the instance

Run the config command to configure access information.

Before you run the command, replace the endpoint, instance name, AccessKey ID, and AccessKey secret in the command with your actual values.
config --endpoint https://myinstance.cn-hangzhou.ots.aliyuncs.com --instance myinstance --id NTSVL******************** --key 7NR2****************************************
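
If you prefer to access the instance programmatically, the same four values initialize a client in the Tablestore SDK for Python (the tablestore package). The following is a minimal sketch that reuses the placeholder values from the preceding command:

# pip install tablestore
from tablestore import OTSClient

# The same four values that the CLI config command takes; all placeholders.
client = OTSClient(
    'https://myinstance.cn-hangzhou.ots.aliyuncs.com',  # endpoint
    'NTSVL********************',                          # AccessKey ID
    '7NR2****************************************',       # AccessKey secret
    'myinstance')                                         # instance name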

Step 2: Create and use a data table

After you create a data table, select it so that you can perform subsequent table operations or data operations on it.

  1. Run the following command to create a data table named order (an SDK equivalent is sketched after this procedure):

    create -t order --pk '[{"c":"id","t":"string"}]'
  2. Run the following command to use the data table named order:

    use --wc -t order

For more information, see Operations on data tables.
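
For comparison, the following is a minimal sketch of the same create step in the Tablestore SDK for Python. The table options and the 0/0 reserved throughput are illustrative assumptions rather than values taken from the CLI example; the credentials are placeholders:

from tablestore import (OTSClient, TableMeta, TableOptions,
                        ReservedThroughput, CapacityUnit)

client = OTSClient('https://myinstance.cn-hangzhou.ots.aliyuncs.com',
                   'yourAccessKeyId', 'yourAccessKeySecret', 'myinstance')

# Same schema as the create command: a single STRING primary key column "id".
table_meta = TableMeta('order', [('id', 'STRING')])
# Illustrative options: keep data forever, store one version per column.
table_options = TableOptions(time_to_live=-1, max_version=1)
# 0/0 reserved read/write capacity units, i.e. fully pay-as-you-go.
reserved_throughput = ReservedThroughput(CapacityUnit(0, 0))

client.create_table(table_meta, table_options, reserved_throughput)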

Step 3: Perform data operations

You can write, update, read, delete, or export data based on your business requirements.

Write data

  • Insert a row of data.

    The following sample command shows how to insert a row of data into a data table (an SDK equivalent is sketched after this list):

    put --pk '["000000114d884ca1dbd6b9a58e8d0d94"]' --attr '[{"c":"pBrand","v":"brand1"},{"c":"pPrice","v":1599.0},{"c":"payTime","v":1509615334404,"isint":true},{"c":"totalPrice","v":2498.99},{"c":"sName","v":"Peter"},{"c":"pId","v":"p0003004"},{"c":"oId","v":"o0039248410"},{"c":"hasPaid","v":true},{"c":"sId","v":"s0015"},{"c":"orderTime","v":1509614885965,"isint":true},{"c":"pName","v":"brand1 type"},{"c":"cName","v":"Mary"},{"c":"pType","v":"Mobile phone"},{"c":"pCount","v":1,"isint":true},{"c":"cId","v":"c0018"}]'
  • Import data.

    Download the sample data package to your local device, decompress the package, and then run the import command to import the data in a batch.

    Note

    The sample data file contains a total of 1 million rows of order data. You can specify the number of rows that you want to import by using the -l parameter of the import command.

    The following sample command shows how to import 50,000 rows of order data in the sample data file to the current table and use the current time as the timestamp. In the sample command, yourFilePath specifies the path where the sample data package is decompressed. Example: D:\\order_demo_data_1000000\\order_demo_data_1000000.

    import -i yourFilePath --ignore_version -l 50000

    The following result is returned:

    Current speed is: 15800 rows/s. Total succeed count 15800, failed count 0.
    Current speed is: 27400 rows/s. Total succeed count 43200, failed count 0.
    Import finished, total count is 50000, failed 0 rows.
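
The insert operation can also be issued through the Tablestore SDK for Python, as referenced in the first item of this list. The following minimal sketch writes a subset of the attribute columns from the CLI example; the credentials are placeholders:

from tablestore import OTSClient, Row, Condition, RowExistenceExpectation

client = OTSClient('https://myinstance.cn-hangzhou.ots.aliyuncs.com',
                   'yourAccessKeyId', 'yourAccessKeySecret', 'myinstance')

# Primary key and a subset of the attribute columns from the CLI example.
primary_key = [('id', '000000114d884ca1dbd6b9a58e8d0d94')]
attribute_columns = [('pBrand', 'brand1'), ('pPrice', 1599.0),
                     ('payTime', 1509615334404), ('hasPaid', True),
                     ('cName', 'Mary'), ('pCount', 1)]
row = Row(primary_key, attribute_columns)

# IGNORE writes the row regardless of whether it already exists.
condition = Condition(RowExistenceExpectation.IGNORE)
consumed, return_row = client.put_row('order', row, condition)
print('write capacity units consumed:', consumed.write)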

Update data

The following sample command shows how to update the row whose primary key column value is 000000114d884ca1dbd6b9a58e8d0d94. With --condition ignore, the update is performed regardless of whether the row exists: if the row exists, the specified columns overwrite the existing values; if it does not, a new row is inserted.

update --pk '["000000114d884ca1dbd6b9a58e8d0d94"]' --attr '[{"c":"pBrand","v":"brand2"},{"c":"pPrice","v":1599.0},{"c":"payTime","v":1509615334404,"isint":true},{"c":"totalPrice","v":2498.99},{"c":"sName","v":"Peter"},{"c":"pId","v":"p0003004"},{"c":"oId","v":"o0039248410"},{"c":"hasPaid","v":true},{"c":"sId","v":"s0015"},{"c":"orderTime","v":1509614885965,"isint":true},{"c":"pName","v":"brand2 type"},{"c":"cName","v":"Mary"},{"c":"pType","v":"Mobile phone"},{"c":"pCount","v":1,"isint":true},{"c":"cId","v":"c0018"}]'  --condition ignore
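
A minimal sketch of the same update in the Tablestore SDK for Python, assuming placeholder credentials; only two of the columns from the CLI example are rewritten:

from tablestore import OTSClient, Row, Condition, RowExistenceExpectation

client = OTSClient('https://myinstance.cn-hangzhou.ots.aliyuncs.com',
                   'yourAccessKeyId', 'yourAccessKeySecret', 'myinstance')

primary_key = [('id', '000000114d884ca1dbd6b9a58e8d0d94')]
# PUT overwrites the listed attribute columns, as the CLI update does.
update_of_attribute_columns = {'PUT': [('pBrand', 'brand2'),
                                       ('pName', 'brand2 type')]}
row = Row(primary_key, update_of_attribute_columns)

# RowExistenceExpectation.IGNORE corresponds to --condition ignore.
condition = Condition(RowExistenceExpectation.IGNORE)
consumed, return_row = client.update_row('order', row, condition)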

Read data

You can execute SQL statements to query and analyze data in the table. For more information, see SQL query.
  • Read a row of data.

    The following sample command shows how to read the row whose primary key column value is 000000114d884ca1dbd6b9a58e8d0d94 (an SDK equivalent of both read operations is sketched after this list):

    get --pk '["000000114d884ca1dbd6b9a58e8d0d94"]'

    The following result is returned:

    +----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+
    | id                               | cId   | cName | hasPaid | oId         | orderTime     | pBrand | pCount | pId      | pName       | pPrice | pType        | payTime       | sId   | sName | totalPrice |
    +----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+
    | 000000114d884ca1dbd6b9a58e8d0d94 | c0018 | Mary  | true    | o0039248410 | 1509614885965 | brand1 | 1      | p0003004 | brand1 type | 1599   | Mobile phone | 1509615334404 | s0015 | Peter | 2498.99    |
    +----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+
  • Scan data.

    The following sample command shows how to scan up to 10 rows of data in a data table:

    scan --limit 10
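
A minimal sketch of both read operations in the Tablestore SDK for Python, assuming placeholder credentials. get_row performs the single-row read and get_range performs the scan:

from tablestore import OTSClient, Direction, INF_MIN, INF_MAX

client = OTSClient('https://myinstance.cn-hangzhou.ots.aliyuncs.com',
                   'yourAccessKeyId', 'yourAccessKeySecret', 'myinstance')

# Single-row read, equivalent to the get command.
primary_key = [('id', '000000114d884ca1dbd6b9a58e8d0d94')]
consumed, return_row, next_token = client.get_row(
    'order', primary_key, columns_to_get=None, column_filter=None, max_version=1)
if return_row is not None:
    print(return_row.primary_key, return_row.attribute_columns)

# Range scan over the full primary-key range, equivalent to scan --limit 10.
consumed, next_start_pk, row_list, next_token = client.get_range(
    'order', Direction.FORWARD,
    [('id', INF_MIN)], [('id', INF_MAX)], limit=10, max_version=1)
for row in row_list:
    print(row.primary_key, row.attribute_columns)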

Delete data

The following sample command shows how to delete the row whose primary key column value is 000000114d884ca1dbd6b9a58e8d0d94:

delete --pk '["000000114d884ca1dbd6b9a58e8d0d94"]'
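
A minimal SDK sketch of the same delete, assuming placeholder credentials:

from tablestore import OTSClient, Row, Condition, RowExistenceExpectation

client = OTSClient('https://myinstance.cn-hangzhou.ots.aliyuncs.com',
                   'yourAccessKeyId', 'yourAccessKeySecret', 'myinstance')

# Delete the row by primary key; IGNORE means no existence check is made first.
row = Row([('id', '000000114d884ca1dbd6b9a58e8d0d94')])
condition = Condition(RowExistenceExpectation.IGNORE)
consumed, return_row = client.delete_row('order', row, condition)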

Export data

You can export data from the data table to a local JSON file.

The following sample command shows how to export data from the pId, oId, and cName columns of the current table to the local file /tmp/mydata.json:

scan -o /tmp/mydata.json -c pId,oId,cName
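
If you need a programmatic export instead of the CLI, the following sketch approximates the same result with the Tablestore SDK for Python: it pages through the table with get_range, keeps the three columns, and writes the rows to a JSON file. The credentials and the batch size of 1,000 rows are assumptions:

import json

from tablestore import OTSClient, Direction, INF_MIN, INF_MAX

client = OTSClient('https://myinstance.cn-hangzhou.ots.aliyuncs.com',
                   'yourAccessKeyId', 'yourAccessKeySecret', 'myinstance')

# Page through the whole table, keeping only the three exported columns.
records = []
start_pk, end_pk = [('id', INF_MIN)], [('id', INF_MAX)]
while start_pk is not None:
    consumed, start_pk, row_list, next_token = client.get_range(
        'order', Direction.FORWARD, start_pk, end_pk,
        columns_to_get=['pId', 'oId', 'cName'], limit=1000, max_version=1)
    for row in row_list:
        record = dict(row.primary_key)
        # Attribute columns arrive as (name, value, timestamp) tuples.
        record.update({name: value for name, value, _ in row.attribute_columns})
        records.append(record)

with open('/tmp/mydata.json', 'w') as f:
    json.dump(records, f)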

For more information, see Operations on data.

References

You can use secondary indexes or search indexes to accelerate data queries. For more information, see Secondary index and Search index.