Learning MySQL stored procedures

李魔佛 published an article • 0 comments • 27 views • 2020-10-18 16:38 • from related topics

I had not used database stored procedures much before, so here is a record of the concrete usage:
Environment: MySQL 8 + Navicat
 
1. Create a stored procedure:
Run this in a Navicat query window:
delimiter $$
create procedure hello_procedure ()
begin
    select 'hello world';
end $$
delimiter ;

call hello_procedure();
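You can also call the procedure from Python; a minimal sketch, assuming pymysql and placeholder credentials (not part of the original post):

import pymysql

# host/user/password/database are placeholders for your own setup
conn = pymysql.connect(host='127.0.0.1', user='root',
                       password='xxxxxx', database='test')
with conn.cursor() as cursor:
    cursor.callproc('hello_procedure')  # same as: call hello_procedure();
    print(cursor.fetchone())            # ('hello world',)
conn.close()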

 
 

DBUtils package renamed to dbutils: surprisingly, most module names switched from CamelCase to underscores

李魔佛 published an article • 0 comments • 98 views • 2020-10-04 17:01 • from related topics

The old imports looked like this:
 
from DBUtils.PooledDB import PooledDB, SharedDBConnection
POOL = PooledDB

Now they look like this:
 
from dbutils.pooled_db import PooledDB
from dbutils.persistent_db import PersistentDB

Usage is otherwise much the same as before.
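A minimal pooled-connection sketch under the new package layout; pymysql and the connection parameters are assumptions for illustration:

import pymysql
from dbutils.pooled_db import PooledDB  # was: from DBUtils.PooledDB import PooledDB

# creator is the DB driver module; the connection kwargs are placeholders
POOL = PooledDB(creator=pymysql, maxconnections=5,
                host='127.0.0.1', user='root',
                password='xxxxxx', database='test')
conn = POOL.connection()  # borrow a connection from the pool
with conn.cursor() as cursor:
    cursor.execute('select 1')
    print(cursor.fetchone())
conn.close()  # returns the connection to the pool rather than closing it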
 

A few small pitfalls configuring Kibana to connect to Elasticsearch in Docker

李魔佛 published an article • 0 comments • 769 views • 2020-08-09 01:49 • from related topics

When I built and deployed it outside Docker, it was enough to edit the host in config/kibana.yml, changing the default http://elasticsearch:9200 to http://127.0.0.1:9200. If your Elasticsearch requires authentication, just add two more lines below it:
elasticsearch.username: "elastic"
elasticsearch.password: "xxxxxx"  # the password you set when configuring ES
 
BUT, with the configuration above Kibana fails to start properly in Docker. docker logs <container ID> shows:
 
log [17:39:09.057] [warning][data][elasticsearch] No living connections
log [17:39:09.058] [warning][licensing][plugins] License information could not be obtained from Elasticsearch due to Error: No Living connections error
log [17:39:09.635] [warning][admin][elasticsearch] Unable to revive connection: http://127.0.0.1:9200/
log [17:39:09.636] [warning][admin][elasticsearch] No living connections
log [17:39:12.137] [warning][admin][elasticsearch] Unable to revive connection: http://127.0.0.1:9200/
log [17:39:12.138] [warning][admin][elasticsearch] No living connections
log [17:39:14.640] [warning][admin][elasticsearch] Unable to revive connection: http://127.0.0.1:9200/
log [17:39:14.640] [warning][admin][elasticsearch] No living connections
log [17:39:17.143] [warning][admin][elasticsearch] Unable to revive connection: http://127.0.0.1:9200/
log [17:39:17.143] [warning][admin][elasticsearch] No living connections
log [17:39:19.645] [warning][admin][elasticsearch] Unable to revive connection: http://127.0.0.1:9200/

Yet curl against 127.0.0.1:9200 from the host worked fine. Then it occurred to me that no bridge network had been configured for Docker, so the two containers probably could not reach each other. Changing the host in kibana.yml to the host machine's real IP (an internal 172.x address) solved the problem.
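Another way to fix it, rather than pointing Kibana at the host IP, is to put both containers on a user-defined bridge network so Kibana can reach Elasticsearch by container name; a sketch, where the network name, container names, and image tags are made-up examples:

docker network create es-net
docker run -d --name elasticsearch --net es-net -e "discovery.type=single-node" -p 9200:9200 elasticsearch:7.8.0
docker run -d --name kibana --net es-net -p 5601:5601 kibana:7.8.0
# with this layout, kibana.yml can keep the default http://elasticsearch:9200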

(1412, 'Table definition has changed, please retry transaction')

李魔佛 published an article • 0 comments • 554 views • 2020-06-13 23:20 • from related topics

Python raised this error while accessing the database:
(1412, 'Table definition has changed, please retry transaction')

One cause is multiple threads operating on the database, so that the database the current cursor is bound to gets switched to another one.
For example, a cursor on database A keeps running a query while another thread switches that same cursor over to database B.
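One way to avoid this is to give each thread its own connection and cursor instead of sharing one; a minimal sketch, assuming pymysql, with all names made up for illustration:

import threading
import pymysql

def worker(db_name):
    # each thread opens its own connection, so a database switch in
    # one thread cannot affect a query running in another
    conn = pymysql.connect(host='127.0.0.1', user='root',
                           password='xxxxxx', database=db_name)
    with conn.cursor() as cursor:
        cursor.execute('select count(*) from some_table')  # placeholder table
        print(db_name, cursor.fetchone())
    conn.close()

for db in ('db_a', 'db_b'):
    threading.Thread(target=worker, args=(db,)).start()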
 

Installing and configuring Logstash 6.5 on Windows: the tutorials online are for Linux, and there is a pitfall

李魔佛 published an article • 2 comments • 838 views • 2020-02-28 10:05 • from related topics

https://www.cnblogs.com/jstarseven/p/7704893.html
Every Logstash install/config tutorial that Baidu turns up looks exactly like this one.
 
Start the service to test whether the installation succeeded:
cd bin
./logstash -e 'input { stdin { } } output { stdout {} }'

That command is for Linux.
On Windows, change logstash to logstash.bat and run it.
 
It fails with:
ERROR: Unknown command '{'

See: 'bin/logstash --help'
[ERROR] 2020-02-28 10:04:29.307 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

After searching some non-Chinese sites, it turned out the problem was the single quotes: change them to double quotes and it works, as shown below.
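So on Windows the working command is:
 
logstash.bat -e "input { stdin { } } output { stdout {} }"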
 

bandcamp makes mobile development simpler

linxiaojue published an article • 0 comments • 669 views • 2019-12-14 05:12 • from related topics

bandcamp makes mobile development simpler:
http://ydkfpgjd.bandcamp.com/
http://TalkingData.bandcamp.com/
http://Bugly.bandcamp.com/
http://Box2D.bandcamp.com/
http://aineice.bandcamp.com/
http://wyyp.bandcamp.com/
http://Prepo.bandcamp.com/
http://Chipmunk.bandcamp.com/
http://openinstall.bandcamp.com/
http://MobileInsight.bandcamp.com/
http://zhugelo.bandcamp.com/
http://CobubRazor.bandcamp.com/
http://Testin.bandcamp.com/
http://crashlytics.bandcamp.com/
http://APKProtect.bandcamp.com/
http://Ucloud.bandcamp.com/
http://ydkfpgj.bandcamp.com/releases
http://TalkingData.bandcamp.com/releases
http://Bugly.bandcamp.com/releases
http://Box2D.bandcamp.com/releases
http://aineice.bandcamp.com/releases
http://wyyp.bandcamp.com/releases
http://Prepo.bandcamp.com/releases
http://Chipmunk.bandcamp.com/releases
http://openinstall.bandcamp.com/releases
http://MobileInsight.bandcamp.com/releases
http://zhugelo.bandcamp.com/releases
http://CobubRazor.bandcamp.com/releases
http://Testin.bandcamp.com/releases
http://crashlytics.bandcamp.com/releases
http://APKProtect.bandcamp.com/releases
http://Ucloud.bandcamp.com/releases

MongoDB date-condition queries

李魔佛 published an article • 0 comments • 606 views • 2019-12-10 10:41 • from related topics

Sometimes new Date() does not work.
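From Python, the usual approach is to pass datetime objects to pymongo rather than new Date(); a minimal sketch, where the collection and field names are made up:

import datetime
from pymongo import MongoClient

coll = MongoClient('localhost', 27017)['test']['example']

start = datetime.datetime(2019, 12, 1)
end = datetime.datetime(2019, 12, 10)
# match documents whose create_time falls in [start, end)
for doc in coll.find({'create_time': {'$gte': start, '$lt': end}}):
    print(doc)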

Python: connecting to Redis with redis.StrictRedis.from_url

李魔佛 published an article • 0 comments • 1973 views • 2019-08-23 16:43 • from related topics

Connecting to Redis by URL:
 
r = redis.StrictRedis.from_url(url)
 
The url takes one of the following forms:
 
redis://[:password]@localhost:6379/0
rediss://[:password]@localhost:6379/0
unix://[:password]@/path/to/socket.sock?db=0
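A concrete usage sketch; the password and db number are placeholders:

import redis

r = redis.StrictRedis.from_url('redis://:mypassword@localhost:6379/0')
r.ping()  # raises an exception if the connection fails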

Original article; please credit the source when reposting:
http://30daydo.com/article/527
 

MongoDB: checking that an array field is not empty

李魔佛 published an article • 0 comments • 3561 views • 2019-08-20 11:08 • from related topics

First insert a batch of test data:
db.test_tab.insert({array:[]})
db.test_tab.insert({array:[]})
db.test_tab.insert({array:[]})
db.test_tab.insert({array:[1,2,3,4,5]})
db.test_tab.insert({array:[1,2,3,4,5,6]})

Then use the following command to match documents whose array field is not empty:
db.getCollection("test_tab").find({array:{$exists:true,$ne:[]}}); // the field exists and is not an empty array

The Redis health_check_interval parameter has no effect

李魔佛 published an article • 0 comments • 1920 views • 2019-08-09 16:13 • from related topics

Because the script sits in a blocking loop listening to a Redis publisher, after a long time Redis drops the connection (or the network breaks), the subscriber stays stuck waiting to receive, and everything the publisher sends afterwards is lost.
 
import sys
import redis

# helper
class RedisHelp(object):

    def __init__(self, channel):
        # self.pool = redis.ConnectionPool('10.18.6.46', port=6379)
        # self.conn = redis.Redis(connection_pool=self.pool)
        # the pooled form above cannot be used for publish/subscribe
        self.conn = redis.Redis(host='10.18.6.46')
        self.publish_channel = channel
        self.subscribe_channel = channel

    def publish(self, msg):
        self.conn.publish(self.publish_channel, msg)  # args: 1. channel name, 2. message

    def subscribe(self):
        self.pub = self.conn.pubsub()
        self.pub.subscribe(self.subscribe_channel)
        self.pub.parse_response()
        print('initial')
        return self.pub


helper = RedisHelp('cuiqingcai')

# subscriber
if sys.argv[1] == 's':
    print('in subscribe mode')
    pub = helper.subscribe()
    while 1:
        print('waiting for publish')
        pub.check_health()  # was pubsub.check_health(), an undefined name
        msg = pub.parse_response()

        s = str(msg[2], encoding='utf-8')
        print(s)
        if s == 'exit':
            break

# publisher
elif sys.argv[1] == 'p':
    print('in publish mode')
    msg = sys.argv[2]
    print(f'msg -> {msg}')
    helper.publish(msg)

The official docs say to pass the parameter:
health_check_interval=30  # heartbeat check every 30s
 
In practice, though, it does not help here on recent redis-py (3.3+): health checks only run when a command is about to be sent, not while the client is blocked waiting on a pub/sub response, so self.conn = redis.Redis(host='10.18.6.46', health_check_interval=30) changes nothing.
 
This is also explained on the author's GitHub page:
https://github.com/andymccurdy/redis-py/issues/1199
 
So change the code to:
data = client.blpop('key', timeout=300)
After 300s the call times out, data is None, and you go back to listening.
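A minimal sketch of the blpop-based loop that replaces the pub/sub listener; the key name is a placeholder:

import redis

client = redis.Redis(host='10.18.6.46')
while True:
    # blocks for at most 300s; returns None on timeout, else (key, value)
    data = client.blpop('msg_queue', timeout=300)
    if data is None:
        continue  # timed out: loop back and listen again
    msg = data[1].decode('utf-8')
    print(msg)
    if msg == 'exit':
        break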
 
 

MongoDB: renaming a field inside a nested document

李魔佛 published an article • 0 comments • 2014 views • 2019-08-05 13:55 • from related topics

In MongoDB, the syntax for renaming a field is:

db.test.update({},{$rename:{'old_field':'new_field'}},true,true)

For example:
db.getCollection('example').update({},{$rename:{'corp':'企业'}})

This renames the field corp to 企业.
 
What about nested fields?
Say corp is itself an embedded document, such as { 'address':'USA', 'phone':'12345678' }
 
To rename its address sub-field to 地址:
 
db.getCollection('example').update({},{$rename:{'corp.address':'corp.地址'}})
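The same nested rename from pymongo, for reference (connection details are placeholders):

from pymongo import MongoClient

coll = MongoClient('localhost', 27017)['test']['example']
# $rename accepts dotted paths for fields inside embedded documents
coll.update_many({}, {'$rename': {'corp.address': 'corp.地址'}})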

Original article; please credit the source when reposting.
Original link: http://30daydo.com/article/521
 

Are MongoDB motor's async operations slower than the synchronous ones?

量化投机者 replied to the question • 2 followers • 1 reply • 1708 views • 2019-08-03 09:01 • from related topics

MongoDB find returns data in the same order every time

李魔佛 published an article • 0 comments • 737 views • 2019-07-26 09:00 • from related topics

As long as the find query stays the same, the documents come back in the same order every time (add an explicit sort() if the order actually matters to you).

Django version-incompatibility error: AuthenticationMiddleware

李魔佛 published an article • 0 comments • 3528 views • 2019-07-04 15:43 • from related topics

Django 2.2:
ERRORS:
?: (admin.E408) 'django.contrib.auth.middleware.AuthenticationMiddleware' must be in MIDDLEWARE in order to use the admin application.
 
This worked fine on the previous version; the error appeared right after upgrading.
Downgrade Django:
 
pip install django==2.1.7
 
PS: Django's version compatibility really is a big problem: upgrade the Django version one day without rigorous testing and the consequences are disastrous.
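An alternative to downgrading is to satisfy the check directly by making sure the middleware named in admin.E408 is listed in settings.py; the stack below is the Django 2.2 default, for reference:

# settings.py
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',  # the one admin.E408 requires
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]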

pymongo find_one_and_update error: the shard key is required

李魔佛 published an article • 0 comments • 1627 views • 2019-06-10 17:13 • from related topics

The error message:
  File "C:\ProgramData\Anaconda3\lib\site-packages\pymongo\helpers.py", line 155, in _check_command_response
    raise OperationFailure(msg % errmsg, code, response)
pymongo.errors.OperationFailure: Query for sharded findAndModify must contain the shard key
2019-06-10 16:14:32 [scrapy.core.engine] INFO: Closing spider (finished)
2019-06-10 16:14:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:

The shard key needs to be included in the query.
findAndModify only ever touches one document, but which shard holds that document? Since that cannot be determined from the query alone, the shard key has to be added to it.
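In pymongo terms the fix looks like the sketch below, assuming a collection sharded on user_id (a made-up key); the point is simply that the filter must contain the shard key:

from pymongo import MongoClient, ReturnDocument

coll = MongoClient('localhost', 27017)['test']['example']
# the filter includes the shard key (user_id), so mongos can route
# the findAndModify to a single shard instead of broadcasting it
doc = coll.find_one_and_update(
    {'user_id': 12345, 'status': 'pending'},  # shard key included
    {'$set': {'status': 'done'}},
    return_document=ReturnDocument.AFTER,
)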
 
 
From the official docs:
Targeted Operations vs. Broadcast Operations
Generally, the fastest queries in a sharded environment are those that mongos route to a single shard, using the shard key and the cluster meta data from the config server. These targeted operations use the shard key value to locate the shard or subset of shards that satisfy the query document.
For queries that don’t include the shard key, mongos must query all shards, wait for their responses and then return the result to the application. These “scatter/gather” queries can be long running operations.
Broadcast Operations
mongos instances broadcast queries to all shards for the collection unless the mongos can determine which shard or subset of shards stores this data.

After the mongos receives responses from all shards, it merges the data and returns the result document. The performance of a broadcast operation depends on the overall load of the cluster, as well as variables like network latency, individual shard load, and number of documents returned per shard. Whenever possible, favor operations that result in targeted operation over those that result in a broadcast operation.
Multi-update operations are always broadcast operations.
The updateMany() and deleteMany() methods are broadcast operations, unless the query document specifies the shard key in full.
Targeted Operations
mongos can route queries that include the shard key or the prefix of a compound shard key to a specific shard or set of shards. mongos uses the shard key value to locate the chunk whose range includes the shard key value and directs the query at the shard containing that chunk.

For example, if the shard key is:
{ a: 1, b: 1, c: 1 }

The mongos program can route queries that include the full shard key or either of the following shard key prefixes at a specific shard or set of shards:
{ a: 1 }
{ a: 1, b: 1 }

All insertOne() operations target to one shard. Each document in the insertMany() array targets to a single shard, but there is no guarantee all documents in the array insert into a single shard.
All updateOne(), replaceOne() and deleteOne() operations must include the shard key or _id in the query document. MongoDB returns an error if these methods are used without the shard key or _id.
Depending on the distribution of data in the cluster and the selectivity of the query, mongos may still perform a broadcast operation to fulfill these queries.
Index Use
If the query does not include the shard key, the mongos must send the query to all shards as a “scatter/gather” operation. Each shard will, in turn, use either the shard key index or another more efficient index to fulfill the query.
If the query includes multiple sub-expressions that reference the fields indexed by the shard key and the secondary index, the mongos can route the queries to a specific shard and the shard will use the index that will allow it to fulfill most efficiently.
Sharded Cluster Security
Use Internal Authentication to enforce intra-cluster security and prevent unauthorized cluster components from accessing the cluster. You must start each mongod or mongos in the cluster with the appropriate security settings in order to enforce internal authentication.
See Deploy Sharded Cluster with Keyfile Access Control for a tutorial on deploying a secured sharded cluster.
Cluster Users
Sharded clusters support Role-Based Access Control (RBAC) for restricting unauthorized access to cluster data and operations. You must start each mongod in the cluster, including the config servers, with the --auth option in order to enforce RBAC. Alternatively, enforcing Internal Authentication for inter-cluster security also enables user access controls via RBAC.
With RBAC enforced, clients must specify a --username, --password, and --authenticationDatabase when connecting to the mongos in order to access cluster resources.
Each cluster has its own cluster users. These users cannot be used to access individual shards.
See Enable Access Control for a tutorial on enabling adding users to an RBAC-enabled MongoDB deployment.