Create two DataSources, one for reads and one for writes, in the MyBatis configuration; the mapperLocations property of each SqlSessionFactoryBean specifies that data source's own mapper XML files. Put all read statements in the read mapper files and all write statements in the write mapper files.
Pros: simple to implement.
Cons: painful to maintain, since the existing mapper XML files must be reorganized; no support for multiple read replicas; hard to extend.
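A minimal Spring Java-config sketch of this scheme, assuming Spring and mybatis-spring are on the classpath (the bean names, injected data sources, and mapper paths are illustrative, not from the original post):

```java
import javax.sql.DataSource;

import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

@Configuration
public class ReadWriteSplitConfig {

    // Read-only statements live under mapper/read/, write statements under mapper/write/.
    @Bean
    public SqlSessionFactoryBean readSqlSessionFactory(DataSource readDataSource) throws Exception {
        SqlSessionFactoryBean factory = new SqlSessionFactoryBean();
        factory.setDataSource(readDataSource);
        factory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources("classpath:mapper/read/*.xml"));
        return factory;
    }

    @Bean
    public SqlSessionFactoryBean writeSqlSessionFactory(DataSource writeDataSource) throws Exception {
        SqlSessionFactoryBean factory = new SqlSessionFactoryBean();
        factory.setDataSource(writeDataSource);
        factory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources("classpath:mapper/write/*.xml"));
        return factory;
    }
}
```

Read-only DAOs are then wired against the read factory and writers against the write factory, which is exactly why the scheme scales poorly: every new mapper file has to be sorted into the right group by hand.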
Implementation approach
Aspect-Oriented Programming (AOP is short for Aspect Oriented Programming). Object orientation, as we know, is characterized by inheritance, polymorphism, and encapsulation. Encapsulation distributes functionality across different objects, which in software design is usually called responsibility assignment: different classes implement different methods, so the code is spread across individual classes. The benefit is lower code complexity and reusable classes.
Rather than bothering with a JDK dynamic proxy or a CGLIB dynamic proxy, here is a bare-bones example using plain reflection.
import java.lang.reflect.Method;

public static void main(String[] args) {
    String method = "add";
    try {
        Class<?> classz = Class.forName(testService.class.getName());
        // Object obj = classz.newInstance();
        Object obj = classz.getDeclaredConstructor(String.class).newInstance("I am the parameter");
        Method[] methods = classz.getDeclaredMethods();
        for (Method m : methods) {
            if (m.getName().equals(method)) {
                System.out.println("------before------: You can do something in here.");
                m.invoke(obj); // add() takes no arguments
                System.out.println("------after------: You can do something in here too.");
            }
        }
    } catch (ReflectiveOperationException e) {
        // covers ClassNotFound, NoSuchMethod, Instantiation, IllegalAccess and InvocationTarget
        e.printStackTrace();
    }
}
That is all there is to it: instantiate the class reflectively, run a before step ahead of the method call, and an after step once the method returns.
package test;

public class testService {
    private final String name;

    public testService(String n) {
        this.name = n;
    }

    public void add() {
        System.out.println("invoke Class:" + this.getClass().getName() + " and call the method:add->" + name);
    }
}
Of course, a CGLIB dynamic proxy makes this even simpler. Here is an example from the web:
The proxy class:
import java.lang.reflect.Method;

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;
import net.sf.cglib.proxy.MethodProxy;

public class CglibProxy implements MethodInterceptor {
    private Enhancer enhancer = new Enhancer();

    public Object getProxy(Class<?> clazz) {
        enhancer.setSuperclass(clazz);
        enhancer.setCallback(this);
        return enhancer.create();
    }

    @Override
    public Object intercept(Object obj, Method method, Object[] args, MethodProxy proxy) throws Throwable {
        System.out.println("start...");
        Object result = proxy.invokeSuper(obj, args);
        System.out.println("end...");
        return result;
    }
}
Next, a target class to proxy:
public class CgService {
    public void say() {
        System.out.println("CgService hello");
    }

    public void miss() {
        System.out.println("miss game");
    }
}
Then the test:
public class Test {
    public static void main(String[] args) {
        CglibProxy cglibProxy = new CglibProxy();
        CgService cgService = (CgService) cglibProxy.getProxy(CgService.class);
        cgService.say();
        cgService.miss();
    }
}
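For comparison, the JDK's built-in dynamic proxy achieves the same before/after interception without any third-party jar, at the cost of requiring an interface. A minimal sketch (all names here are made up for illustration):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class JdkProxyDemo {
    interface Greeter {
        String greet(String name);
    }

    static class SimpleGreeter implements Greeter {
        public String greet(String name) {
            return "hello " + name;
        }
    }

    // Records the interception order so the before/after hooks are observable.
    static final List<String> EVENTS = new ArrayList<>();

    static Greeter wrap(final Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        EVENTS.add("before:" + method.getName());    // before hook
                        Object result = method.invoke(target, args); // the real call
                        EVENTS.add("after:" + method.getName());     // after hook
                        return result;
                    }
                });
    }

    public static void main(String[] args) {
        Greeter greeter = wrap(new SimpleGreeter());
        System.out.println(greeter.greet("world")); // prints "hello world"
        System.out.println(EVENTS);                 // [before:greet, after:greet]
    }
}
```

Unlike CGLIB, which subclasses the target, `Proxy.newProxyInstance` can only proxy interfaces, which is why Spring falls back to CGLIB for classes without one.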
I. Kafka operations
1. Start Kafka:
# cd /opt/kafka_2.10-0.10.1.1/bin
# ./kafka-server-start.sh /opt/kafka_2.10-0.10.1.1/config/server.properties &
2. Stop Kafka:
# ./kafka-server-stop.sh
3. Create a topic (named test, with one replica and one partition):
# ./kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
4. List all topics:
# ./kafka-topics.sh --list --zookeeper 127.0.0.1:2181
5. Start a console producer and send messages:
# ./kafka-console-producer.sh --broker-list localhost:9092 --topic test
(Type a message, e.g. hello kafka; press Ctrl+C to quit.)
6. Start a console consumer and receive messages:
# ./kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic test --from-beginning
7. Start Kafka in the foreground:
./kafka-server-start.sh ../config/server.properties
8. Start Kafka in the background:
./kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &
9. Start with a specific JMX port:
JMX_PORT=2898 ./kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &
II. Common Zookeeper operations
1. Start the Zookeeper server:
# cd /opt/zookeeper-3.4.10/bin/
# ./zkServer.sh start
2. Stop the Zookeeper server:
# cd /opt/zookeeper-3.4.10/bin/
# ./zkServer.sh stop
3. Restart the Zookeeper server:
# cd /opt/zookeeper-3.4.10/bin/
# ./zkServer.sh restart
4. Check the Zookeeper process:
# ps -ef | grep zookeeper
5. Check the Zookeeper server status:
# cd /opt/zookeeper-3.4.10/bin/
# ./zkServer.sh status
6. Log in with the Zookeeper client:
# cd /opt/zookeeper-3.4.10/bin/
# ./zkCli.sh -server 127.0.0.1:2181
I. Install Zookeeper
1. Download the release: http://zookeeper.apache.org/releases.html#download
2. Go to the Zookeeper conf directory (here, D:\kafka\zookeeper-3.4.11\conf);
3. Rename "zoo_sample.cfg" to "zoo.cfg";
4. Edit zoo.cfg: find dataDir=/tmp/zookeeper and change it to a directory that exists on your machine;
5. Set the system environment variables:
a. Add ZOOKEEPER_HOME = D:\kafka\zookeeper-3.4.11
b. Append %ZOOKEEPER_HOME%\bin to the Path variable;
6. Change the default Zookeeper port (2181) in zoo.cfg if needed;
7. Open a new cmd window and run zkServer to start Zookeeper.
If the console shows the server binding to the configured port without errors, the startup succeeded.
II. Install and run Kafka
1. Download Kafka: http://kafka.apache.org/downloads.html
2. Go to the Kafka config directory, D:\kafka\kafka_2.12-1.0.1\config;
3. Edit "server.properties";
4. Find log.dirs=/tmp/kafka-logs and change it to a directory you can write to;
5. Find zookeeper.connect=localhost:2181 and adjust it if your Zookeeper runs elsewhere;
6. By default Kafka listens on port 9092 and connects to Zookeeper on its default port 2181.
Start Kafka with: .\bin\windows\kafka-server-start.bat .\config\server.properties
Note: make sure the Zookeeper instance is up and running before starting the Kafka server.
III. The Kafka code
1. Producer configuration:
@Bean
public Map<String,Object> getDefaultFactoryArg(){
    Map<String,Object> arg = new HashMap<>();
    arg.put("bootstrap.servers",ConstantKafka.KAFKA_SERVER);
    arg.put("retries","1");
    arg.put("batch.size","16384");
    arg.put("linger.ms","1");
    arg.put("buffer.memory","33554432");
    // A producer only needs serializers; deserializers and group.id belong in the consumer config.
    arg.put("key.serializer","org.apache.kafka.common.serialization.StringSerializer");
    arg.put("value.serializer","org.apache.kafka.common.serialization.StringSerializer");
    return arg;
}
@Bean
public DefaultKafkaProducerFactory defaultKafkaProducerFactory(){
DefaultKafkaProducerFactory factory = new DefaultKafkaProducerFactory(this.getDefaultFactoryArg());
return factory;
}
@Bean
public KafkaTemplate kafkaTemplate(){
KafkaTemplate template = new KafkaTemplate(defaultKafkaProducerFactory());
template.setDefaultTopic(ConstantKafka.KAFKA_TOPIC1);
template.setProducerListener(kafkaProducerListener());
return template;
}
@Bean
public KafkaProducerListener kafkaProducerListener(){
KafkaProducerListener listener = new KafkaProducerListener();
return listener;
}
2. Consumer configuration:
@Bean
public Map<String,Object> getDefaultArgOfConsumer(){
    Map<String,Object> arg = new HashMap<>();
    arg.put("bootstrap.servers",ConstantKafka.KAFKA_SERVER);
    arg.put("group.id","100");
    arg.put("enable.auto.commit","false");
    // Only meaningful when enable.auto.commit is true; the original listed this key twice.
    arg.put("auto.commit.interval.ms","15000");
    // A consumer only needs deserializers.
    arg.put("key.deserializer","org.apache.kafka.common.serialization.StringDeserializer");
    arg.put("value.deserializer","org.apache.kafka.common.serialization.StringDeserializer");
    return arg;
}
@Bean
public DefaultKafkaConsumerFactory defaultKafkaConsumerFactory(){
DefaultKafkaConsumerFactory factory = new DefaultKafkaConsumerFactory(getDefaultArgOfConsumer());
return factory;
}
@Bean
public KafkaConsumerMessageListener kafkaConsumerMessageListener(){
KafkaConsumerMessageListener listener = new KafkaConsumerMessageListener();
return listener;
}
/**
 * Listener container properties for the log topic.
 */
@Bean
public ContainerProperties containerPropertiesOfLog(){
ContainerProperties properties = new ContainerProperties(ConstantKafka.KAFKA_TOPIC1);
properties.setMessageListener(kafkaConsumerMessageListener());
return properties;
}
/**
 * Listener container properties for the other topic.
 */
@Bean
public ContainerProperties containerPropertiesOfOther(){
ContainerProperties properties = new ContainerProperties(ConstantKafka.KAFKA_TOPIC2);
properties.setMessageListener(kafkaConsumerMessageListener());
return properties;
}
@Bean(initMethod = "doStart")
public KafkaMessageListenerContainer kafkaMessageListenerContainerOfLog(){
KafkaMessageListenerContainer container = new KafkaMessageListenerContainer(defaultKafkaConsumerFactory(),containerPropertiesOfLog());
return container;
}
@Bean(initMethod = "doStart")
public KafkaMessageListenerContainer kafkaMessageListenerContainerOfOther(){
KafkaMessageListenerContainer container = new KafkaMessageListenerContainer(defaultKafkaConsumerFactory(),containerPropertiesOfOther());
return container;
}
3. Message-producing service
@Service
public class KafkaProducerServer implements IKafkaProducerServer {
@Autowired
private KafkaTemplate kafkaTemplate;
public static final String ROLE_LOG = "log";
public static final String ROLE_WEB = "web";
public static final String ROLE_APP = "app";
/**
 * Send a message.
 * @param topic topic to send to
 * @param msg message object
 * @param isUsePartition whether to pick an explicit partition
 * @param partitionNum partition count; must be > 0 when isUsePartition is true
 * @param role caller role: app, web (defaults to log)
 * @return result wrapper
 * @throws IllegalServiceException
 */
@Override
public ResultResp<Void> send(String topic, Object msg, boolean isUsePartition, Integer partitionNum, String role) throws IllegalServiceException {
if (role == null){
role = ROLE_LOG;
}
String key = role + "_" + msg.hashCode();
String valueString = JsonUtil.ObjectToJson(msg, true);
if (isUsePartition) {
// an explicit partition was requested
int partitionIndex = getPartitionIndex(key, partitionNum);
ListenableFuture<SendResult<String, Object>> result = kafkaTemplate.send(topic, partitionIndex, key, valueString);
return checkProRecord(result);
} else {
ListenableFuture<SendResult<String, Object>> result = kafkaTemplate.send(topic, key, valueString);
return checkProRecord(result);
}
}
/**
 * Map a key to a partition index.
 *
 * @param key message key; may be null
 * @param num partition count
 * @return index in [0, num)
 */
private int getPartitionIndex(String key, int num) {
if (key == null) {
Random random = new Random();
return random.nextInt(num);
} else {
int result = Math.abs(key.hashCode()) % num;
return result;
}
}
/**
 * Check the record in the send result.
 *
 * @param res future returned by KafkaTemplate.send
 * @return result wrapper
 */
private ResultResp<Void> checkProRecord(ListenableFuture<SendResult<String, Object>> res) {
ResultResp<Void> resp = new ResultResp<>();
resp.setCode(ConstantKafka.KAFKA_NO_RESULT_CODE);
resp.setInfo(ConstantKafka.KAFKA_NO_RESULT_MES);
if (res != null) {
try {
SendResult r = res.get(); // wait for the send result
/* check the offset in recordMetadata; the producerRecord itself is not inspected */
Long offsetIndex = r.getRecordMetadata().offset();
if (offsetIndex != null && offsetIndex >= 0) {
resp.setCode(ConstantKafka.SUCCESS_CODE);
resp.setInfo(ConstantKafka.SUCCESS_MSG);
} else {
resp.setCode(ConstantKafka.KAFKA_NO_OFFSET_CODE);
resp.setInfo(ConstantKafka.KAFKA_NO_OFFSET_MES);
}
} catch (InterruptedException e) {
e.printStackTrace();
resp.setCode(ConstantKafka.KAFKA_SEND_ERROR_CODE);
resp.setInfo(ConstantKafka.KAFKA_SEND_ERROR_MES + ":" + e.getMessage());
} catch (ExecutionException e) {
e.printStackTrace();
resp.setCode(ConstantKafka.KAFKA_SEND_ERROR_CODE);
resp.setInfo(ConstantKafka.KAFKA_SEND_ERROR_MES + ":" + e.getMessage());
}
}
return resp;
}
}
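The key-to-partition mapping inside getPartitionIndex can be tried in isolation. This stand-alone re-implementation (the class and method names are mine, not part of the service) shows the behavior, including the null-key fallback:

```java
import java.util.Random;

public class PartitionDemo {
    // Same scheme as getPartitionIndex above: hash non-null keys, randomize null keys.
    static int partitionIndex(String key, int numPartitions) {
        if (key == null) {
            return new Random().nextInt(numPartitions);
        }
        // Note: Math.abs(Integer.MIN_VALUE) is still negative, so production code
        // should guard against a hashCode of Integer.MIN_VALUE.
        return Math.abs(key.hashCode()) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition, preserving per-key ordering.
        System.out.println(partitionIndex("log_12345", 4) == partitionIndex("log_12345", 4)); // true
        // A null key is spread randomly over the partitions.
        int p = partitionIndex(null, 4);
        System.out.println(p >= 0 && p < 4); // true
    }
}
```

This mirrors the idea behind Kafka's own default partitioner, though Kafka hashes key bytes with murmur2 rather than String.hashCode.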
4. Producer listener
public class KafkaProducerListener implements ProducerListener {
    protected final Logger logger = Logger.getLogger(KafkaProducerListener.class.getName());

    public KafkaProducerListener(){
    }

    @Override
    public void onSuccess(String topic, Integer partition, Object key, Object value, RecordMetadata recordMetadata) {
        logger.info("----------------- Kafka send succeeded");
        logger.info("---------- topic:" + topic);
        logger.info("---------- partition:" + partition);
        logger.info("---------- key:" + key);
        logger.info("---------- value:" + value);
        logger.info("---------- RecordMetadata:" + recordMetadata);
        logger.info("----------------- Kafka send finished");
    }

    @Override
    public void onError(String topic, Integer partition, Object key, Object value, Exception e) {
        logger.info("----------------- Kafka send failed");
        logger.info("---------- topic:" + topic);
        logger.info("---------- partition:" + partition);
        logger.info("---------- key:" + key);
        logger.info("---------- value:" + value);
        logger.info("----------------- Kafka send failure logged");
        e.printStackTrace();
    }

    /**
     * Whether this listener wants onSuccess callbacks.
     * @return false: success events are not delivered
     */
    @Override
    public boolean isInterestedInSuccess() {
        return false;
    }
}
5. Consumer listener
public class KafkaConsumerMessageListener implements MessageListener<String,Object> {
    private Logger logger = Logger.getLogger(KafkaConsumerMessageListener.class.getName());

    public KafkaConsumerMessageListener(){
    }

    /**
     * Message handler; persists LOG-topic messages.
     * @param record the consumed record
     */
    @Override
    public void onMessage(ConsumerRecord<String, Object> record) {
        logger.info("============= Kafka message received =============");
        String topic = record.topic();
        String key = record.key();
        Object value = record.value();
        long offset = record.offset();
        int partition = record.partition();
        if (ConstantKafka.KAFKA_TOPIC1.equals(topic)){
            doSaveLogs(value.toString());
        }
        logger.info("------------- topic:" + topic);
        logger.info("------------- value:" + value);
        logger.info("------------- key:" + key);
        logger.info("------------- offset:" + offset);
        logger.info("------------- partition:" + partition);
        logger.info("============= Kafka message handled =============");
    }

    private void doSaveLogs(String data){
        SocketIOPojo<BikeLogPojo> logs = JsonUtil.JsonToObject(data, SocketIOPojo.class);
        // write to the database
    }
}
This setup is only for local testing; a real environment should still run on Linux.
Requirements:
Redis-win-3.2.100
Ruby-win-2.2.4-x64
redis-3.2.2.gem (the Ruby Redis driver; its version must match the Redis version)
redis-trib.rb source
1. Install Redis and start 3 instances (a Redis cluster needs at least 3 nodes; fewer cannot form a cluster);
2. Create the cluster with the redis-trib.rb tool. Since it is written in Ruby, a Ruby runtime and the redis-xxxx.gem driver must be installed first.
1. Download and install Redis
The GitHub releases are at: https://github.com/MSOpenTech/redis/releases/
Redis ships as both msi and zip downloads; here we take the zip of version 3.2.100.
Unzip Redis-win-3.2.100.zip. For convenience, put it in a drive root and rename the directory to Redis, e.g. C:\Redis or D:\Redis (the desktop works too, if you prefer).
Start 3 Redis instances from separate configuration files. Redis defaults to port 6379, so ports 7000, 7001, and 7002 are used here for the 3 instances.
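Each instance needs its own config file and working directory. A minimal per-instance config, assuming the stock redis.windows.conf as a base (shown for port 7000; repeat with 7001 and 7002 and their own nodes-*.conf files), might look like:

```conf
port 7000
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 15000
appendonly yes
```

Each instance is then started with redis-server.exe and its own config file, and redis-trib.rb create is pointed at the three 127.0.0.1 addresses to form the cluster.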