1. Connecting to HiveServer2 with Java (creating a table, loading data, and displaying data)
This example requires you to first log in to the master1 node of your managed Hadoop cluster; unless otherwise noted, the following operations are performed on master1.

org.apache.hive.jdbc.HiveDriver is the HiveServer2 driver class name, and the HiveServer2 connection URL has the form "jdbc:hive2://ip:10000/default".
● Write the sample code

The sample code, Hive2JdbcClient.java, is as follows:
import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class Hive2JdbcClient {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    /**
     * @param args
     * @throws SQLException
     */
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // replace "hive" here with the name of the user the queries should run as
        Connection con = DriverManager.getConnection("jdbc:hive2://uhadoop-******-master2:10000/default", "", "");
        Statement stmt = con.createStatement();
        String tableName = "testHive2DriverTable";
        stmt.execute("drop table if exists " + tableName);
        stmt.execute("create table " + tableName + " (key int, value string)");

        // show tables
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }

        // describe table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // load data into table
        // NOTE: with "load data inpath" the file must already be in HDFS;
        // test.sh below uploads /tmp/b.txt (a ctrl-A separated file with two
        // fields per line) to /user/hive/warehouse/b.txt
        String filepath = "/user/hive/warehouse/b.txt";
        sql = "load data inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        stmt.execute(sql);

        // select * query
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }

        // regular hive query
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }
}

Note:
Connection con = DriverManager.getConnection("jdbc:hive2://uhadoop-******-master2:10000/default", "", "");

Replace uhadoop-******-master2 with the hostname or IP address of your cluster's master2 node.
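As an optional sketch (not part of the original sample; the HIVE_HOST variable is a hypothetical convention), you can avoid editing the source for each cluster by reading the hostname from the environment instead:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Hive2JdbcHostFromEnv {
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Hypothetical convention: take the master2 hostname or IP from the
        // HIVE_HOST environment variable instead of hardcoding it.
        String host = System.getenv("HIVE_HOST");
        if (host == null) {
            System.err.println("Set HIVE_HOST to your master2 hostname or IP");
            System.exit(1);
        }
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://" + host + ":10000/default", "", "");
        System.out.println("Connected to " + host);
        con.close();
    }
}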
● Compile

javac Hive2JdbcClient.java

● Run the program
The test.sh script is as follows:
#!/bin/bash
hdfs dfs -rm /user/hive/warehouse/b.txt
echo -e '1\x01foo' > /tmp/b.txt
echo -e '2\x01bar' >> /tmp/b.txt
hdfs dfs -put /tmp/b.txt /user/hive/warehouse/
HADOOP_HOME=/home/hadoop/
# HIVE_HOME must already be set in your environment; this script does not set it
CLASSPATH=.:$HIVE_HOME/conf
for i in ${HADOOP_HOME}/share/hadoop/mapreduce/lib/hadoop-*.jar ; do
    CLASSPATH=$CLASSPATH:$i
done
for i in ${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-*.jar ; do
    CLASSPATH=$CLASSPATH:$i
done
for i in ${HADOOP_HOME}/share/hadoop/common/lib/hadoop-*.jar ; do
    CLASSPATH=$CLASSPATH:$i
done
for i in ${HADOOP_HOME}/share/hadoop/common/hadoop-*.jar ; do
    CLASSPATH=$CLASSPATH:$i
done
for i in ${HIVE_HOME}/lib/*.jar ; do
    CLASSPATH=$CLASSPATH:$i
done
java -cp $CLASSPATH Hive2JdbcClient

2. Connecting to HiveServer2 with Python (creating a table, loading data, and displaying data)
The procedure for using a Python client with HiveServer2 is as follows:
● Download pyhs2:

git clone https://github.com/BradRuderman/pyhs2.git
● Install the dependencies: yum install gcc-c++ cyrus-sasl-* python-devel
● Install setuptools:

wget -q http://peak.telecommunity.com/dist/ez_setup.py
python ez_setup.py
If the installation above fails, download the setuptools-0.6c11.tar.gz package manually and install it.
● Build and install pyhs2

Enter the pyhs2 directory and install:
cd pyhs2
python setup.py build
python setup.py install

● Write the sample code
The sample code is example.py in the pyhs2 directory:
import pyhs2

with pyhs2.connect(host='uhadoop-******-master2',
                   port=10000,
                   authMechanism="PLAIN",
                   user='root',
                   password='test',
                   database='default') as conn:
    with conn.cursor() as cur:
        # Show databases
        print cur.getDatabases()

        # Execute query
        cur.execute("select * from test_hive")

        # Return column info from query
        print cur.getSchema()

        # Fetch table results
        for i in cur.fetch():
            print i

3. Reading HBase data through a Hive external table
By creating an HBase external table in Hive, you can analyze HBase's unstructured data with simple SQL statements.
Open the HBase shell and create table t1:
create 't1',{NAME => 'f1',VERSIONS => 2}
put 't1','rowkey001','f1:col1','value01'
put 't1','rowkey001','f1:col2','value02'
put 't1','rowkey001','f1:colf','value03'
scan 't1'

The contents of t1 are then as follows:
hbase(main):013:0> scan 't1'
ROW                    COLUMN+CELL
 rowkey001             column=f1:col1, timestamp=1481075364575, value=value01
 rowkey001             column=f1:col2, timestamp=1481075364607, value=value02
 rowkey001             column=f1:colf, timestamp=1481075364641, value=value03

Open the Hive CLI and create the external table:
hive> CREATE EXTERNAL TABLE t_hive_hbase(
    >   rowkey string,
    >   cf map<STRING,STRING>
    > )
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,f1:")
    > TBLPROPERTIES ("hbase.table.name" = "t1");

Read the HBase data with a SQL statement; the result is as follows:
hive> select * from t_hive_hbase;
OK
rowkey001    {"col1":"value01","col2":"value02","colf":"value03"}
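The external table can also be queried from Java over JDBC, reusing the HiveServer2 connection from section 1; individual HBase columns are reachable through the map keys. A minimal sketch (replace the placeholder hostname with your master2 node, as before):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveHbaseQuery {
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Replace with your cluster's master2 hostname or IP, as in section 1.
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://uhadoop-******-master2:10000/default", "", "");
        Statement stmt = con.createStatement();
        // Hive map columns support element access by key, so a single HBase
        // column can be selected as cf['col1'].
        ResultSet res = stmt.executeQuery(
                "select rowkey, cf['col1'] from t_hive_hbase");
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }
        con.close();
    }
}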