
Reference documentation:

http://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html

Edit my.cnf and add the following under [mysqld]:

sql_mode='ONLY_FULL_GROUP_BY,NO_AUTO_VALUE_ON_ZERO,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,PIPES_AS_CONCAT,ANSI_QUOTES'

Start MySQL:

/etc/init.d/mysqld start

mysql -uroot -p
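
Once connected, it is worth confirming that the new mode took effect:

mysql> SELECT @@GLOBAL.sql_mode;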

JSON query functions:

JSON_EXTRACT(json_doc, path[, path] ...)

Basic query
mysql> SELECT JSON_EXTRACT('{"a":1,"b":0}','$.a') AS json ;
+------+
| json |
+------+
| 1    |
+------+
1 row in set (0.00 sec)
Array query
mysql> SELECT JSON_EXTRACT('[10,20,30]','$[*]') AS json ;
+--------------+
| json         |
+--------------+
| [10, 20, 30] |
+--------------+
1 row in set (0.00 sec)
Query an element of an array stored under a key
mysql> SELECT JSON_EXTRACT('{"a":1,"b":0,"c":[10,20,30]}','$.c[1]') AS json ;
+------+
| json |
+------+
| 20   |
+------+
1 row in set (0.00 sec)

JSON_SEARCH(json_doc, one_or_all, search_str[, escape_char[, path] ...])

one_or_all: 'one' stops at the first match; 'all' returns every matching path

Note: JSON_SEARCH matches string values, which makes it well suited to searching arrays of strings.


Search an array
SELECT JSON_SEARCH('["1","0",["10","20","30"]]','one','1')  AS json ; 

Combined with JSON_EXTRACT
SELECT JSON_SEARCH(JSON_EXTRACT('{"a":1,"b":0,"c":["10","20","30"]}','$.c'),'all','10') AS json ;
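
For reference, on a stock MySQL 5.7 both searches above match the first element, so each should return the matching path as a quoted JSON string:

+--------+
| json   |
+--------+
| "$[0]" |
+--------+

If nothing matches, JSON_SEARCH returns NULL.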

http://www.innomysql.net/article/21730.html

Here is an up-to-date configuration file template for MySQL 5.6 and 5.7.
It covers roughly 90% of the tuning options; you only need to adjust a few values to your own hardware, such as the InnoDB buffer pool size and IO capacity (innodb_buffer_pool_size, innodb_io_capacity). Note in particular that this file runs unmodified on both MySQL 5.6 and 5.7.
This relies on a small trick, visible in the file itself: the version-specific [mysqld-5.7] option group at the end is read only by 5.7 servers and ignored by 5.6. If you find problems with any of the parameters, feedback is welcome so we can improve it together.

What triggered this effort is that most MySQL configuration files floating around the web are very, very old — mostly written for MySQL 5.1. As a result, the vast majority of MySQL installations do not run in an optimal environment, leading to misuse and sometimes catastrophic accidents such as data loss or master/slave inconsistency. These problems were solved in 5.6 and later versions.

So throw away your so-called tried-and-true but stale, broken MySQL configuration file and step into a new era of high-performance, highly reliable, highly scalable MySQL. All you need to do is take this configuration file and use it in production. The configuration follows:

[client]
user=davi
password=88888888
[mysqld]
########basic settings########
server-id = 11
port = 3306
user = mysql
bind_address = 10.166.224.32
autocommit = 0
character_set_server=utf8mb4
skip_name_resolve = 1
max_connections = 800
max_connect_errors = 1000
datadir = /data/mysql_data
transaction_isolation = READ-COMMITTED
explicit_defaults_for_timestamp = 1
join_buffer_size = 134217728
tmp_table_size = 67108864
tmpdir = /tmp
max_allowed_packet = 16777216
sql_mode = "STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER"
interactive_timeout = 1800
wait_timeout = 1800
read_buffer_size = 16777216
read_rnd_buffer_size = 33554432
sort_buffer_size = 33554432
########log settings########
log_error = error.log
slow_query_log = 1
slow_query_log_file = slow.log
log_queries_not_using_indexes = 1
log_slow_admin_statements = 1
log_slow_slave_statements = 1
log_throttle_queries_not_using_indexes = 10
expire_logs_days = 90
long_query_time = 2
min_examined_row_limit = 100
########replication settings########
master_info_repository = TABLE
relay_log_info_repository = TABLE
log_bin = bin.log
sync_binlog = 1
gtid_mode = on
enforce_gtid_consistency = 1
log_slave_updates
binlog_format = row
relay_log = relay.log
relay_log_recovery = 1
binlog_gtid_simple_recovery = 1
slave_skip_errors = ddl_exist_errors
########innodb settings########
innodb_page_size = 8192
innodb_buffer_pool_size = 6G
innodb_buffer_pool_instances = 8
innodb_buffer_pool_load_at_startup = 1
innodb_buffer_pool_dump_at_shutdown = 1
innodb_lru_scan_depth = 2000
innodb_lock_wait_timeout = 5
innodb_io_capacity = 4000
innodb_io_capacity_max = 8000
innodb_flush_method = O_DIRECT
innodb_file_format = Barracuda
innodb_file_format_max = Barracuda
innodb_log_group_home_dir = /redolog/
innodb_undo_directory = /undolog/
innodb_undo_logs = 128
innodb_undo_tablespaces = 3
innodb_flush_neighbors = 1
innodb_log_file_size = 4G
innodb_log_buffer_size = 16777216
innodb_purge_threads = 4
innodb_large_prefix = 1
innodb_thread_concurrency = 64
innodb_print_all_deadlocks = 1
innodb_strict_mode = 1
innodb_sort_buffer_size = 67108864
########semi sync replication settings########
plugin_dir=/usr/local/mysql/lib/plugin
plugin_load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
loose_rpl_semi_sync_master_enabled = 1
loose_rpl_semi_sync_slave_enabled = 1
loose_rpl_semi_sync_master_timeout = 5000
[mysqld-5.7]
innodb_buffer_pool_dump_pct = 40
innodb_page_cleaners = 4
innodb_undo_log_truncate = 1
innodb_max_undo_log_size = 2G
innodb_purge_rseg_truncate_frequency = 128
binlog_gtid_simple_recovery=1
log_timestamps=system
transaction_write_set_extraction=MURMUR32
show_compatibility_56=on

This time we need to install the MySQL relational database.
The main requirement is support for the JSON data format.

Reference documentation: http://dev.mysql.com/doc/refman/5.7/en/linux-installation-yum-repo.html

Install environment: Vagrant CentOS 6.7

Download the MySQL yum repository package and install the server:

# wget http://dev.mysql.com/get/mysql57-community-release-el6-7.noarch.rpm
# yum localinstall ./mysql57-community-release-el6-7.noarch.rpm
# yum install mysql-community-server

Change the root password. MySQL 5.7 password validation is very strict.

Add the following under [mysqld] in /etc/my.cnf, then restart mysqld:
skip-grant-tables

update mysql.user set authentication_string=password('123qwe') where user='root' and Host='localhost'; -- change the password

update mysql.user set Host='%' where user='root'; -- allow connections from any host

flush privileges;

Restore my.cnf (remove skip-grant-tables) and restart mysqld.
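
Alternatively, if you can still log in, the supported way to change a password on MySQL 5.7 is ALTER USER. Note that the default validate_password policy rejects weak passwords such as '123qwe'; the value below is only an illustration:

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPass-123!';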

Disable SELinux

Edit /etc/selinux/config:

SELINUX=disabled

setenforce 0    # takes effect immediately, without a reboot

MySQL Error 2013

SQL Error (2013): Lost connection to MySQL server at 'waiting for initial communication packet', system error: 0

If remote clients hit this error, open port 3306 in iptables:

vi  /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT 
service  iptables restart 

My own configuration:

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -p icmp -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -i eth0 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

This time we need to install the MongoDB document database.
The main requirement is support for the JSON data format.

Reference documentation: https://docs.mongodb.org/master/tutorial/install-mongodb-on-red-hat/

Install environment: Vagrant CentOS 6.7

First, configure the yum repo on CentOS (create /etc/yum.repos.d/mongodb-org-3.2.repo):

[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=0
enabled=1

Install MongoDB pinned to version 3.2.3:

yum install -y mongodb-org-3.2.3 mongodb-org-server-3.2.3 mongodb-org-shell-3.2.3 mongodb-org-mongos-3.2.3 mongodb-org-tools-3.2.3

Start MongoDB (the yum packages install the service as mongod):

/etc/init.d/mongod restart

Open the port

iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 27017 -j ACCEPT

Create an initial user

# mongo
> use admin
> db.createUser({user:'root',pwd:'root',roles:['readWrite','dbAdmin']})

Allow external connections

Edit the mongod configuration file:

bind_ip = 127.0.0.1    # comment out this line
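
Note: the 3.2 yum packages actually ship /etc/mongod.conf in YAML form, so the equivalent edit looks like this (a sketch of the relevant section only; restart mongod afterwards):

net:
  port: 27017
#  bindIp: 127.0.0.1    # commented out so mongod listens on all interfaces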

Initial environment

We previously set up a pseudo-distributed deployment.

According to the earlier plan, we still have two worker-node servers, 36 and 37:

192.168.31.36 , 192.168.31.37

Environment preparation

Change the hostnames of the three servers:

sudo vi /etc/sysconfig/network

// 192.168.31.30 -> Master
// 192.168.31.36 -> Slave36
// 192.168.31.37 -> Slave37
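
On CentOS 6 that file looks like this (shown for the Master; use Slave36 / Slave37 on the other two machines):

NETWORKING=yes
HOSTNAME=Master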

Map the IPs

sudo vi /etc/hosts
----------------------------------------
192.168.31.30 Master
192.168.31.31 Job
192.168.31.36 Slave36
192.168.31.37 Slave37
----------------------------------------

Copy hadoop onto the worker machines.

// 192.168.31.36,192.168.31.37
cd /public_data/ // this is a shared directory mapped into my VMs; adjust to your own setup
cp -Rf ./hadoop/hadoop-2.6.4 /usr/local/hadoop

SSH: set up passwordless login between servers (30 logs into 36 and 37 without a password)

Method 1:
// 192.168.31.30
su hadoop
cp ~/.ssh/id_rsa.pub /public_data/hadoop/

// 192.168.31.36,192.168.31.37
su hadoop
mkdir ~/.ssh/
cat /public_data/hadoop/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh/

Method 2:

//192.168.31.30
su hadoop
ssh-copy-id 192.168.31.36 
ssh-copy-id 192.168.31.37
// Note: if you still cannot log in after the key has been copied,
// incorrect permissions may be the cause — fix them like this:
chmod 700 ~/.ssh/
chmod 600 ~/.ssh/authorized_keys
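
Both methods assume the hadoop user on 192.168.31.30 already has a key pair. If it does not, generate one first:

// 192.168.31.30
su hadoop
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   // key pair with an empty passphrase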

Configure the server 192.168.31.30

We are converting the earlier pseudo-distributed setup into a real cluster; the Master no longer stores data itself.

// 192.168.31.30
Edit the ./etc/hadoop/slaves file.
The slaves file tells the Master which worker nodes exist.
---------------------------------------------------------------
# original content
localhost
# change to
192.168.31.36
192.168.31.37
---------------------------------------------------------------

Then stop the firewall (for easier testing, stop it on all three servers):
service iptables stop 

Configure the worker nodes 192.168.31.36 and 192.168.31.37

These two servers are easy to configure: just copy core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml over from the Master (192.168.31.30) — see the scp sketch after the four files below.

 //core-site.xml
 <configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://192.168.31.30:9000</value>
    </property>
    <property>
            <name>hadoop.tmp.dir</name>
            <value>file:/usr/local/hadoop/tmp</value>
            <description>Abase for other temporary directories.</description>
    </property>
</configuration>


//hdfs-site.xml
<configuration>
    <!-- change the IP -->
    <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>192.168.31.30:50090</value>
    </property>
    <property>
            <name>dfs.replication</name>
            <value>1</value>
    </property>
    <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>

// mapred-site.xml
<configuration>
    <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
    </property>
    <!-- added parameters -->
    <property>
            <name>mapreduce.jobhistory.address</name>
            <value>192.168.31.30:10020</value>
    </property>
    <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>192.168.31.30:19888</value>
    </property>
</configuration>

//yarn-site.xml
<configuration>
    <!-- change the IP -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>192.168.31.30</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>
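
One way to push these four files to both workers (a sketch, assuming the hadoop user and the installation path used above):

// 192.168.31.30
cd /usr/local/hadoop/etc/hadoop
scp core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml hadoop@192.168.31.36:/usr/local/hadoop/etc/hadoop/
scp core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml hadoop@192.168.31.37:/usr/local/hadoop/etc/hadoop/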

Start the master server, 192.168.31.30

/usr/local/hadoop/sbin/start-dfs.sh

We can see that the two worker nodes, 36 and 37, have started successfully.

Visit http://192.168.31.30:50070/dfshealth.html to check the cluster in the NameNode web UI.



# Start YARN (web UI on port 8088)
/usr/local/hadoop/sbin/start-yarn.sh

# Start the JobHistory server (web UI on port 19888)
/usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver

Test an upload

// create the user's home directory
hdfs dfs -mkdir -p /user/hadoop
// create a folder under that directory
hdfs dfs -mkdir box
// upload a file into the folder
hdfs dfs -put ./vagrant-centos-6.7.box box
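
To confirm the upload worked, list the folder (paths as created above):

hdfs dfs -ls /user/hadoop/box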

Problem log

1. After starting the Master (192.168.31.30), check the two workers:

Slave36 (192.168.31.36)

Slave37 (192.168.31.37)

Run the jps command on each and verify that the DataNode process has started.
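
On a healthy worker — after both start-dfs.sh and start-yarn.sh have been run — the output should look something like this (PIDs are illustrative):

$ jps
2482 DataNode
2601 NodeManager
2755 Jps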

2. Before reformatting the NameNode, clear out the old data directories first, otherwise the DataNodes will not start: each format assigns HDFS a new clusterID, so DataNodes still holding data from the old clusterID refuse to join. Clearing the directories resolves it.
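
A minimal cleanup sketch, assuming hadoop.tmp.dir is /usr/local/hadoop/tmp as configured above:

// on every node: remove old HDFS data (this destroys all stored blocks!)
rm -rf /usr/local/hadoop/tmp/*
// on the Master only: reformat the NameNode
/usr/local/hadoop/bin/hdfs namenode -format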