BurningBright


RedisTemplate pipeline

Posted on 2018-10-05 | Edited on 2018-12-16 | In db

When a job needs to update a lot of Redis key/value pairs, looping over normal commands is inefficient:
it can leave tens of thousands of TCP connections stuck in TIME-WAIT.
So we use a Redis pipeline operation to merge the commands onto one connection.

Use RedisTemplate to read objects

Set the User class deserializer first:

private static final Jackson2JsonRedisSerializer<User> userSerializer = new Jackson2JsonRedisSerializer<>(User.class);
redisTemplate.setValueSerializer(userSerializer);

Pipeline-execute multiple GET commands:

List<Object> result = redisTemplate.executePipelined(new RedisCallback<Object>() {
    @Override
    public Object doInRedis(RedisConnection connection) throws DataAccessException {
        // queue one GET per key; the pipeline collects and deserializes the replies
        for (Object obj : keys)
            connection.get((RedisConstant.USER + obj).getBytes());
        return null;
    }
}, redisTemplate.getValueSerializer());

Use RedisTemplate to write objects

As above, use the User class serializer first:

private static final Jackson2JsonRedisSerializer<User> userSerializer = new Jackson2JsonRedisSerializer<>(User.class);
byte[] value = userSerializer.serialize(user);

Pipeline-execute multiple SET commands:

redisTemplate.executePipelined(new RedisCallback<Object>() {
    @Override
    public Object doInRedis(RedisConnection connection) throws DataAccessException {
        for (Map.Entry<byte[], byte[]> entry : map.entrySet())
            connection.set(entry.getKey(), entry.getValue());
        return null;
    }
});
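
For illustration, the map above could be built with the serializer; a minimal sketch, assuming a users collection and a User.getId() accessor (both assumptions, not shown in the post):

Map<byte[], byte[]> map = new HashMap<>();
for (User user : users) {
    // key convention and getId() are assumptions for this sketch
    byte[] key = (RedisConstant.USER + user.getId()).getBytes();
    map.put(key, userSerializer.serialize(user));
}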

Caution

  1. What the connection returns is a byte array; it needs to go through a deserializer.
  2. doInRedis must return null; the return value is taken over by executePipelined:

     if (result != null)
         throw new InvalidDataAccessApiUsageException(
             "Callback cannot return a non-null value as it gets overwritten by the pipeline");

  3. Don't call connection.closePipeline(); it returns the results without deserializing them.

  4. Deserialization needs a deserializer object, like the sketch below.
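
A minimal sketch of deserializing one raw reply by hand, assuming the userSerializer defined above and a raw byte[] reply from the connection:

// raw is one byte[] reply fetched through the pipeline connection
User user = userSerializer.deserialize(raw);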

https://blog.csdn.net/xiaoliu598906167/article/details/82218525
https://blog.csdn.net/huilixiang/article/details/19484921
https://blog.csdn.net/xiaolyuh123/article/details/78682200
https://my.oschina.net/yuyidi/blog/499951
https://www.cnblogs.com/EasonJim/p/7803067.html#autoid-2-6-0

Install MySQL 5.7.23 on Windows

Posted on 2018-09-24 | Edited on 2018-12-16 | In db

Install MySQL on Windows from the archive (ZIP) distribution.

Download the package from https://www.mysql.com first.

Unpack the archive, then add a my.ini file to the directory root:
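
A minimal my.ini sketch; the paths here are assumptions, adjust basedir/datadir to your unpack location:

[mysqld]
basedir=C:\mysql-5.7.23-winx64
datadir=C:\mysql-5.7.23-winx64\data
port=3306

Then initialize, install, and start the service: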

mysqld --initialize-insecure --user=mysql
mysqld install
net start mysql

If it succeeds, log in with mysql -u root -p and an empty password.
If it fails, delete the data directory and re-initialize with mysqld --initialize.

If login still fails like this:

ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

Stop MySQL with net stop mysql, then either run mysqld --skip-grant-tables
or add skip-grant-tables to my.ini and start again, so MySQL skips the privilege check.
Then:

use mysql;
update user set authentication_string=password("root") where user="root";
flush privileges;

Delete the skip-grant-tables configuration and restart MySQL.

https://blog.csdn.net/wokaowokaowokao12345/article/details/76736152
https://blog.csdn.net/u012730299/article/details/51840416
https://www.jb51.net/article/100718.htm

Synchronize a forked project on GitHub

Posted on 2018-09-22 | Edited on 2020-06-17 | In tool


After forking a project, if you want to follow the upstream owner's commits,
you need to add another remote repository.

Add a remote repository

git remote add liu https://github.com/liuweijw/fw-cloud-framework.git
git remote -v

The name liu is just an alias, separate from your project's remote origin.

Fetch the owner's project updates

git fetch liu
git branch -av

Merge branches

git checkout master
git merge liu/master

If the merge has conflicts or fails, you may need to resolve the merge state,
or just follow the owner's line: reset --hard restores the branch so the merge becomes a fast-forward, as in the sketch below.
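
A sketch of the reset approach; note it discards local commits on master (the assumption being that your own work lives on other branches):

git checkout master
git reset --hard liu/master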

Push

git push origin
#git push -u origin master -f

Finally, push the synchronized local project to your remote repository.
If you are a contributor, a new pull request may be needed to notify the project owner.

https://www.jianshu.com/p/633ae5c491f5

Shadowsocks on AWS Linux

Posted on 2018-09-09 | Edited on 2018-12-16 | In python

On Ubuntu Linux

sudo apt-get update
sudo apt-get install python-pip
sudo pip install shadowsocks

Then add a config file, e.g. shadowsocks.json:


{
    "server": "0.0.0.0",
    "server_port": 8388,
    "password": "123456",
    "local_port": 1080,
    "timeout": 300,
    "method": "aes-256-cfb",
    "fast_open": false
}

Start the service:

sudo ssserver -c shadowsocks.json -d start
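
To stop or restart the daemon later, the same ssserver CLI applies:

sudo ssserver -d stop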

On Amazon Linux (AWS)

Install pip first

sudo yum install python-pip

If the pip install command fails like this:

pkg_resources.DistributionNotFound: pip==9.0.1

Then install pip with easy_install instead:

sudo easy_install pip

Then install Shadowsocks and start the server as above:

sudo pip install shadowsocks

https://github.com/shadowsocks
https://blog.csdn.net/u011054333/article/details/52496303
https://blog.csdn.net/xlx921027/article/details/55102248
https://blog.csdn.net/dlmmu/article/details/78397284
https://blog.csdn.net/daiyutage/article/details/69945850

Spring Boot multiple MyBatis datasources

Posted on 2018-09-05 | Edited on 2018-12-16 | In java

Add configuration

In the application.properties file, add the datasource configuration:

spring.datasource.test1.driverClassName=com.mysql.jdbc.Driver
spring.datasource.test1.url=jdbc:mysql://localhost:3306/test1
spring.datasource.test1.username=root
spring.datasource.test1.password=root

spring.datasource.test2.driverClassName=com.mysql.jdbc.Driver
spring.datasource.test2.url=jdbc:mysql://localhost:3306/test2
spring.datasource.test2.username=root
spring.datasource.test2.password=root

mybatis.mapperLocations=classpath:sql-mappers/**/*.xml

Self-defined datasources

New configuration

Point out the base package path and the SQL session factory reference:

@Configuration
@MapperScan(basePackages="com.burningbright.test1",
            sqlSessionFactoryRef="test1SqlSessionFactory")
public class Datasource1 {
    ...
}

Inject the XML path property

Pull the mapper XML location into the configuration class:

@Value("${mybatis.mapperLocations}")
private String mapperLocation;

Add a property placeholder:

@Bean
public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
    return new PropertySourcesPlaceholderConfigurer();
}

Create datasource

@Bean(name="test1Datasource")
@ConfigurationProperties(prefix="spring.datasource.test1")
public DataSource testDatasource() {
    return DataSourceBuilder.create().build();
}

Create session factory

Make sure the bean name matches the reference in the class-level annotation:

@Bean(name="test1SqlSessionFactory")
public SqlSessionFactory testSqlSessionFactory(
        @Qualifier("test1Datasource") DataSource dataSource) throws Exception {
    SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
    bean.setDataSource(dataSource);
    bean.setMapperLocations(
        new PathMatchingResourcePatternResolver().getResources(mapperLocation));
    return bean.getObject();
}

Create transaction manager bean

@Bean(name="test1TransactionManager")
public DataSourceTransactionManager testTransactionManager(
        @Qualifier("test1Datasource") DataSource dataSource) {
    return new DataSourceTransactionManager(dataSource);
}

Create session template

@Bean(name="test1SqlSessionTemplate")
public SqlSessionTemplate testSqlSessionTemplate(
        @Qualifier("test1SqlSessionFactory") SqlSessionFactory sqlSessionFactory) {
    return new SqlSessionTemplate(sqlSessionFactory);
}

Use data source

Add the second datasource test2 the same way as above, then use the mappers:

@SpringBootApplication
@ComponentScan(basePackages={"com.burningbright.test1","com.burningbright.test2"})
@RestController
public class UserController {
    @Autowired
    UserMapper userMapper;
    ...
}
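
The mapper interface itself is not shown above; a minimal sketch of what it could look like (the interface name and query method are assumptions):

package com.burningbright.test1;

import java.util.List;

// picked up by @MapperScan(basePackages="com.burningbright.test1")
public interface UserMapper {
    // backed by a <select id="findAll"> statement under sql-mappers/**/*.xml
    List<User> findAll();
}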

  1. The prefix must be the same as the property keys' prefix in application.properties.
  2. If no datasource is annotated @Primary, the application will throw an exception,
     unless the surrounding framework level supplies a default primary datasource;
     see the sketch after this list.
  3. @Qualifier injects an object by bean name.
  4. basePackages is the mapper files' package path.
  5. sqlSessionTemplateRef is the reference to the template instance.
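
A sketch of marking one datasource as primary (putting it on the test1 bean is an assumption):

@Primary
@Bean(name="test1Datasource")
@ConfigurationProperties(prefix="spring.datasource.test1")
public DataSource testDatasource() {
    return DataSourceBuilder.create().build();
}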

https://blog.csdn.net/qq_37142346/article/details/78488452
https://blog.csdn.net/a123demi/article/details/74004499
https://blog.csdn.net/hongweigg/article/details/79104321

Regex replace in vim

Posted on 2018-08-03 | Edited on 2018-12-16 | In regex
echo print 'hello' > test
echo print 'hello' >> test
echo print 'hello' >> test

Edit the file with vim test:

shift + :
set number

shift + :
s/\vh(.*)o/y\1ow/g

shift + :
2,3s/\vh(.*)o/y\1ow/gc

shift + :
wq
more test
print 'yellow'
print 'yellow'
print 'yellow'

2,3 means the substitution applies to rows 2 through 3.

In gc (confirm) mode, y confirms one match and a confirms all remaining matches.
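
To cover the whole file instead of a line range, use the % range; \v ("very magic") is what lets the parentheses group without backslashes:

shift + :
%s/\vh(.*)o/y\1ow/g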

https://blog.csdn.net/hongchangfirst/article/details/10400915

Hide the cmd window in an MFC console program

Posted on 2018-08-03 | Edited on 2018-12-16 | In c++
#ifdef UNICODE
#pragma comment( linker, "/subsystem:\"windows\" /entry:\"wmainCRTStartup\"")
#else
#pragma comment( linker, "/subsystem:\"windows\" /entry:\"mainCRTStartup\"")
#endif

In a Unicode environment the entry point is wmainCRTStartup; MFC uses wmainCRTStartup
to start the program's main. The linker subsystem can be CONSOLE|WINDOWS|NATIVE|POSIX;
setting WINDOWS means no console window is created.
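
An alternative sketch that keeps the CONSOLE subsystem but hides the window at runtime (plain Win32 calls, not MFC-specific; the console may still flash briefly at startup):

#include <windows.h>

// hide this process's console window, if any
void HideConsole() {
    HWND hwnd = GetConsoleWindow();
    if (hwnd != NULL)
        ShowWindow(hwnd, SW_HIDE);
}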


https://blog.csdn.net/mdcire/article/details/53456673
https://blog.csdn.net/wenhao_ir/article/details/50897312

Springboot Schedule

Posted on 2018-07-25 | Edited on 2018-12-16 | In java

Maven config

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
</dependencies>

Add the @EnableScheduling annotation on the application main class or a configuration class, e.g.:
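
A minimal sketch (class and method names are placeholders):

@SpringBootApplication
@EnableScheduling
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

// in a separate file
@Component
public class ReportTask {
    // runs every 5 seconds on the scheduler's thread
    @Scheduled(fixedRate = 5000)
    public void report() {
        System.out.println("tick " + System.currentTimeMillis());
    }
}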

Thread pool

In org.springframework.scheduling.config.ScheduledTaskRegistrar, after
afterPropertiesSet runs, Spring sets a default single-thread executor:

protected void scheduleTasks() {
    long now = System.currentTimeMillis();
    if (this.taskScheduler == null) {
        this.localExecutor = Executors.newSingleThreadScheduledExecutor();
        this.taskScheduler = new ConcurrentTaskScheduler(this.localExecutor);
    }
    ...
}

When the program's scheduled jobs take time to run, a single-thread executor
can block other jobs, so their execution times drift.

So when a single node has heavy jobs that must run precisely on time, we need a self-defined thread pool:

@Component
public class ScheduleConfig implements SchedulingConfigurer {
    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        taskRegistrar.setScheduler(taskExecutor());
    }

    @Bean(destroyMethod="shutdown")
    public Executor taskExecutor() {
        return Executors.newScheduledThreadPool(20);
    }
}

Close

When the web application is closed, the thread pool may still be alive, so we need to close it actively.

@Component
public class MyTask implements DisposableBean {
    @Override
    public void destroy() throws Exception {
        ThreadPoolTaskScheduler scheduler =
            (ThreadPoolTaskScheduler) applicationContext.getBean("scheduler");
        scheduler.shutdown();
    }
    ...
}

Cluster

If the schedule needs an enterprise-level structure, a schedule cluster is needed.
Use consistent middleware (a DB or a cache) to control each schedule node's behaviour.
Use a message queue to control job status and cron expressions, even to add or remove jobs.
Spring scheduling does not support persistence, but such mechanisms can work around that; a sketch of a cache-based guard follows.
If the requirement is at that level, Spring scheduling may not be a good choice.
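
For illustration, a sketch of guarding a job with a cache lock so only one node runs each round (StringRedisTemplate, the key name, and the TTL are assumptions; the expiring setIfAbsent needs Spring Data Redis 2.1+):

@Component
public class ClusteredTask {
    @Autowired
    private StringRedisTemplate redisTemplate;

    @Scheduled(cron = "0 */5 * * * *")
    public void run() {
        // only the node that wins the lock executes this round
        Boolean won = redisTemplate.opsForValue()
            .setIfAbsent("lock:clustered-task", "1", Duration.ofMinutes(4));
        if (Boolean.TRUE.equals(won)) {
            // ... do the real work here ...
        }
    }
}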

https://www.cnblogs.com/skychenjiajun/p/9057379.html
https://blog.csdn.net/qq_34125349/article/details/77430956

Kafka quickstart on Windows

Posted on 2018-07-11 | Edited on 2018-12-16 | In db

Download

Download the Kafka package from http://kafka.apache.org/downloads.

Un-tar the package kafka_2.11-1.1.0.tgz.

Start

Use the scripts in bin\windows (the config files stay under config/).
Start ZooKeeper first, because Kafka is cluster-type middleware:

zookeeper-server-start.bat config/zookeeper.properties

If it alerts Error: could not find or load main class ...,
modify bin\windows\kafka-run-class.bat:

add double quotation marks around %CLASSPATH%, like this: "%CLASSPATH%"
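
The line in question sets the launch command; after the fix it looks roughly like this (the exact list of variables differs by Kafka version, so treat this as a sketch):

set COMMAND=%JAVA% %KAFKA_HEAP_OPTS% %KAFKA_JVM_PERFORMANCE_OPTS% -cp "%CLASSPATH%" %KAFKA_OPTS% %*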

Now start the Kafka server:

kafka-server-start.bat config/server.properties

Create topic

kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

kafka-topics.bat --list --zookeeper localhost:2181

Send messages [producer]

kafka-console-producer.bat --broker-list localhost:9092 --topic test
> Hello world
> Hello topic

Start a consumer

kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
Hello world
Hello topic
_

Multi-broker cluster

Copy server.properties:

copy config/server.properties config/server-1.properties
copy config/server.properties config/server-2.properties

Modify the configuration. On the same machine each node's id and port must be unique,
and the log directories should be separated:

config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1

config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2

Now start the other two brokers:

kafka-server-start.bat config/server-1.properties
kafka-server-start.bat config/server-2.properties

Create new topic

kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

Show detail

kafka-topics.bat --describe --zookeeper localhost:2181 --topic my-replicated-topic

kafka-topics.bat --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
  • “leader” is the node responsible for all reads and writes for the given partition.
    Each node will be the leader for a randomly selected portion of the partitions.
  • “replicas” is the list of nodes that replicate the log for this partition
    regardless of whether they are the leader or even if they are currently alive.
  • “isr” is the set of “in-sync” replicas. This is the subset of the replicas list
    that is currently alive and caught-up to the leader.

Send some messages:

kafka-console-producer.bat --broker-list localhost:9092 --topic my-replicated-topic
> test message 1
> test message 2
>

Consume these messages:

kafka-console-consumer.bat --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
test message 1
test message 2
_

Now the leader broker id is 2; kill it (Ctrl+C in its window, or by PID):

> wmic process where "caption = 'java.exe' and commandline like '%server-2.properties%'" get processid
ProcessId
6016
> taskkill /pid 6016 /f

kafka-topics.bat --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 0 Replicas: 2,0,1 Isr: 0,1

Now only brokers 0 and 1 are in-sync, and the leader is broker 0.
Sending messages still works.

Import / export data

Make a test file:

> echo foo> test.txt
> echo bar>> test.txt

connect-standalone.bat config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
> type test.sink.txt
foo
bar
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}

If you append more strings to test.txt, the topic picks up the new data, and test.sink.txt stays in sync too, e.g.:
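
> echo another line>> test.txt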

http://kafka.apache.org/quickstart
https://blog.csdn.net/cx2932350/article/details/78870135

Convert a picture to a customized size

Posted on 2018-06-15 | Edited on 2018-12-16 | In java

Main method

public static BufferedImage imageConvert(InputStream inputStream, Integer limit) throws IOException {
    if (inputStream == null || limit == null) return null;
    // init
    BufferedImage inputImage = ImageIO.read(inputStream);
    if (inputImage == null) return null;
    int size = getImageSize(inputImage);
    if (size <= 0) return null;

    // loop to reduce pixels
    while (size > limit) {
        // calculate the edge-shrink proportion;
        // subtracting .05 improves convert speed
        double rate = (double) limit / size - .05;
        // if the origin is 20+ times the limit, rate may fall below 0;
        // default to ten times smaller than the origin
        if (rate < 0) rate = .1;
        rate = Math.sqrt(rate);

        int width = (int) (inputImage.getWidth(null) * rate);
        int height = (int) (inputImage.getHeight(null) * rate);
        inputImage = imageCompress(inputImage, width, height);

        // recalculate
        size = getImageSize(inputImage);
    }
    return inputImage;
}

Helper methods

public static int getImageSize(BufferedImage img) throws IOException {
    ByteArrayOutputStream byteOS = new ByteArrayOutputStream();
    ImageIO.write(img, "jpg", byteOS);
    int size = byteOS.size();
    byteOS.close();
    return size;
}
public static BufferedImage imageCompress(Image img, int w, int h) throws IOException {
    BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    // redraw the scaled graph
    image.getGraphics().drawImage(img, 0, 0, w, h, null);

    ByteArrayOutputStream byteOS = new ByteArrayOutputStream(256);
    ImageIO.write(image, "jpg", byteOS);
    ByteArrayInputStream byteIS = new ByteArrayInputStream(byteOS.toByteArray());
    BufferedImage product = ImageIO.read(byteIS);
    byteOS.close();
    byteIS.close();
    return product;
}
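
A usage sketch, assuming a local input file and a 100 KB size limit (the file names are placeholders):

try (InputStream in = new FileInputStream("input.jpg")) {
    BufferedImage out = imageConvert(in, 100 * 1024);
    if (out != null)
        ImageIO.write(out, "jpg", new File("output.jpg"));
}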

Summary

Looping over the picture in memory may cause memory problems.
Third-party software, such as ffmpeg, may be a better choice.

https://www.cnblogs.com/shoufengwei/p/8526105.html
