Using the Object Storage Service – AWS CLI
Please note!
You must have the following connection information for your object storage account readily available:
- Access URL (HTTPS URL endpoint of the web service)
- Access key
- Secret key
This information is provided to you when your storage account is created.
Presentation
The AWS CLI (Command Line Interface) is free software (Apache 2.0 license) developed by Amazon Web Services to enable the use of AWS services through commands in a terminal (Linux shell, Windows command prompt, or macOS Terminal). This software allows managing storage as well as identities and permissions associated with them.
The high-level commands (simpler to use) hide the complexity of the S3 API when manipulating buckets and objects.
Configuration
Prerequisite: have the latest version of the AWS CLI installed (available for Linux, Windows, and macOS). Refer to the AWS documentation: What is the AWS Command Line Interface? – AWS Command Line Interface
Minimal configuration procedure:
- Open a Linux shell or Windows command window
- Run the aws configure command and enter:
  - Your AWS Access Key ID (access key)
  - Your AWS Secret Access Key (secret key)
  - EU for Default region name
  - Simply press Enter to accept the default output format
Note: Using the EU region does not mean your data is stored outside France; Cloud Avenue data is indeed stored in mainland France.
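The interactive prompts above can also be scripted, which is convenient for provisioning machines. A minimal sketch, assuming a named profile called cloudavenue (the profile name and placeholder values are illustrative and must be replaced with your own credentials):

```shell
# Non-interactive equivalent of 'aws configure', stored under a named profile.
# 'cloudavenue' and the placeholder values are assumptions, not defaults.
aws configure set aws_access_key_id     AKIAEXAMPLEKEY       --profile cloudavenue
aws configure set aws_secret_access_key exampleSecretKeyValue --profile cloudavenue
aws configure set region                EU                   --profile cloudavenue

# The endpoint is not stored by 'aws configure set'; pass it on each command:
# aws --profile cloudavenue --endpoint-url https://url s3 ls
```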
Usage examples
In the following examples, url represents the hostname of your service endpoint accessible via HTTPS (see your connection information).
The examples use “high-level” commands that hide the complexity of the S3 interface and correspond to simple use cases (managing buckets and objects).
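Every example below repeats the --endpoint-url option. If you run many commands, a small shell wrapper keeps them shorter; this is an optional sketch, and the variable ENDPOINT and the function name s3 are illustrative choices, not part of the AWS CLI:

```shell
# Optional convenience wrapper: avoids repeating --endpoint-url on every call.
ENDPOINT="https://url"   # replace with your own HTTPS access URL

s3() {
    aws --endpoint-url "$ENDPOINT" s3 "$@"
}

# Usage: s3 ls                       (instead of: aws --endpoint-url https://url s3 ls)
#        s3 ls s3://comp1 --recursive
```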
Listing Buckets:
$ aws --endpoint-url https://url s3 ls
2020-03-09 17:19:12 comp1
2020-03-09 17:19:15 comp2
In the example, there are two buckets (comp1 and comp2).
Listing Bucket contents:
$ aws --endpoint-url https://url s3 ls s3://comp1
PRE backup/
2020-03-09 17:23:34 34394 fic2.txt
2020-03-09 17:23:34 110 fichier1.txt
2020-03-09 17:23:34 34235 fichierB2.txt
In the example, the bucket comp1 contains three files (fic2.txt, fichier1.txt, and fichierB2.txt) and a folder (or prefix) named backup. The contents of the folder are not listed here.
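The PRE backup/ entry is not a real directory: S3 stores a flat list of keys, and the listing simply groups keys on the / delimiter. The following stdlib-only sketch reproduces that grouping locally on the key names taken from the example above:

```shell
# S3 has no real directories: keys are flat strings, and a non-recursive
# 'aws s3 ls' groups them on the '/' delimiter. Reproduce that grouping.
cat > keys.txt <<'EOF'
backup/archive1.tgz
fic2.txt
fichier1.txt
fichierB2.txt
EOF

# Keys containing '/' are collapsed into a single 'PRE <prefix>/' entry,
# exactly as in the non-recursive listing above.
awk -F/ 'NF > 1 { pre[$1] = 1; next } { print }
         END { for (p in pre) print "PRE " p "/" }' keys.txt
```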
Recursively listing Bucket contents:
$ aws --endpoint-url https://url s3 ls s3://comp1 --recursive
2020-03-10 11:28:57 13548 backup/archive1.tgz
2020-03-10 11:28:12 34394 fic2.txt
2020-03-10 11:28:12 110 fichier1.txt
2020-03-10 11:28:13 34235 fichierB2.txt
$ aws --endpoint-url https://url s3 ls s3://comp1 --recursive --human-readable
2020-03-10 11:28:57 13.2 KiB backup/archive1.tgz
2020-03-10 11:28:12 33.6 KiB fic2.txt
2020-03-10 11:28:12 110 Bytes fichier1.txt
2020-03-10 11:28:13 33.4 KiB fichierB2.txt
$ aws --endpoint-url https://url s3 ls s3://comp1 --recursive --human-readable --summarize
2020-03-10 11:28:57 13.2 KiB backup/archive1.tgz
2020-03-10 11:28:12 33.6 KiB fic2.txt
2020-03-10 11:28:12 110 Bytes fichier1.txt
2020-03-10 11:28:13 33.4 KiB fichierB2.txt
Total Objects: 5
Total Size: 93.6 KiB
In the example, the Bucket comp1 contains four files, one of which is inside a folder named backup. The --recursive option lists all objects, traversing folders (or prefixes). The --human-readable option displays object sizes in human-friendly units, and --summarize additionally reports the number of objects and the total size.
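The totals reported by --summarize can also be recomputed from a captured listing, which is handy when post-processing reports. A sketch using only awk, with the listing content taken from the example above:

```shell
# Recompute --summarize-style totals from a saved recursive listing.
cat > listing.txt <<'EOF'
2020-03-10 11:28:57      13548 backup/archive1.tgz
2020-03-10 11:28:12      34394 fic2.txt
2020-03-10 11:28:12        110 fichier1.txt
2020-03-10 11:28:13      34235 fichierB2.txt
EOF

# Column 3 holds the size in bytes: count the objects and sum the sizes.
awk '{ count++; total += $3 }
     END { printf "Total Objects: %d\nTotal Size: %d bytes\n", count, total }' listing.txt
```

Here the four listed objects add up to 82,287 bytes; note that the service's own --summarize count can differ from the number of visible files (for instance, if a zero-byte placeholder object exists for a prefix).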
Creating a Bucket:
$ aws --endpoint-url https://url s3 mb s3://comp3
make_bucket: comp3
$ aws --endpoint-url https://url s3 ls
2020-03-09 17:19:12 comp1
2020-03-09 17:19:15 comp2
2020-03-10 11:07:25 comp3
In the example, the empty Bucket comp3 is created.
Deleting a Bucket containing at least one object:
$ aws --endpoint-url https://url s3 rb s3://comp1
remove_bucket failed: s3://comp1 An error occurred (BucketNotEmpty) when calling the DeleteBucket operation
The bucket you tried to delete is not empty.
$ aws --endpoint-url https://url s3 rb s3://comp1 --force
delete: s3://comp1/backup/archive1.tgz
delete: s3://comp1/fichierB2.txt
delete: s3://comp1/fichier1.txt
delete: s3://comp1/fic2.txt
remove_bucket: comp1
In the example, the comp1 Bucket is not empty; the --force option is used to automatically delete all objects and folders it contains, then the Bucket itself.
Note: the --force option does not work if versioning is enabled on the Bucket.
Manipulating objects
Copying an object to a Bucket:
$ aws --endpoint-url https://url s3 cp ./fic2.txt s3://comp2
upload: ./fic2.txt to s3://comp2/fic2.txt
$ aws --endpoint-url https://url s3 ls s3://comp2
2020-03-10 12:15:17 34394 fic2.txt
In the example, the local file is copied (uploaded) to the comp2 Bucket.
Copying an object to a folder:
$ aws --endpoint-url https://url s3 cp ./fichierB2.txt s3://comp2/divers/
upload: ./fichierB2.txt to s3://comp2/divers/fichierB2.txt
$ aws --endpoint-url https://url s3 ls s3://comp2 --recursive
2020-03-10 12:23:05 34235 divers/fichierB2.txt
2020-03-10 12:15:17 34394 fic2.txt
In the example, the divers folder (or prefix) is created if it does not already exist in the Bucket, then the local file is copied (uploaded) into it.
Renaming an object:
$ aws --endpoint-url https://url s3 mv s3://comp2/fic2.txt s3://comp2/fic2bis.txt
move: s3://comp2/fic2.txt to s3://comp2/fic2bis.txt
$ aws --endpoint-url https://url s3 ls s3://comp2 --recursive
2020-03-10 12:23:05 34235 divers/fichierB2.txt
2020-03-10 12:28:24 34394 fic2bis.txt
In the example, the file fic2.txt is renamed to fic2bis.txt.
Moving an object:
$ aws --endpoint-url https://url s3 mv s3://comp2/fic2bis.txt s3://comp2/old/
move: s3://comp2/fic2bis.txt to s3://comp2/old/fic2bis.txt
$ aws --endpoint-url https://url s3 ls s3://comp2 --recursive
2020-03-10 12:23:05 34235 divers/fichierB2.txt
2020-03-10 13:34:41 34394 old/fic2bis.txt
In the example, the folder (or prefix) old is created, then the file fic2bis.txt is moved there.
Moving an object by renaming it:
$ aws --endpoint-url https://url s3 mv s3://comp2/divers/fichierB2.txt s3://comp2/old/fichierB2.bak
move: s3://comp2/divers/fichierB2.txt to s3://comp2/old/fichierB2.bak
$ aws --endpoint-url https://url s3 ls s3://comp2 --recursive
2020-03-10 13:34:41 34394 old/fic2bis.txt
2020-03-10 13:45:27 34235 old/fichierB2.bak
In the example, the fichierB2.txt object located in the divers folder is moved to the old folder (or prefix) and renamed to fichierB2.bak. Note that --recursive must not be used here: with --recursive, mv treats the source as a prefix rather than a single object.
Deleting an object:
$ aws --endpoint-url https://url s3 rm s3://comp2/divers/fichierB2.txt
delete: s3://comp2/divers/fichierB2.txt
$ aws --endpoint-url https://url s3 ls s3://comp2 --recursive
2020-03-10 13:34:41 34394 old/fic2bis.txt
In the example, the fichierB2.txt object is deleted; since the divers folder (or prefix) is now empty, it disappears automatically.
Deleting a folder or a prefix:
$ aws --endpoint-url https://url s3 rm s3://comp2/old --recursive
delete: s3://comp2/old/fichierB2.txt
delete: s3://comp2/old/fichier1.txt
delete: s3://comp2/old/fic2bis.txt
delete: s3://comp2/old/archive1.tgz
delete: s3://comp2/old/fic2.txt
$ aws --endpoint-url https://url s3 ls s3://comp2 --recursive
In the example, the deletion targets the old folder; the --recursive option deletes all objects under that prefix (here, the Bucket ends up empty).