When I create a custom role (while connected to mongos):
db.createRole({
  role: "sysad",
  privileges: [
    {
      resource: { db: "admin", collection: "" },
      actions: ["replSetStateChange", "replSetGetStatus"]
    }
  ],
  roles: ["clusterAdmin"]
})
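(Side note: the docs list replSetStateChange and replSetGetStatus as cluster-level actions, so I suspect that privilege should be on the cluster resource rather than a db/collection pair; it probably only worked for me because clusterAdmin already grants both. Something like this, if I have that right:)

use admin
db.createRole({
  role: "sysad",
  privileges: [
    {
      // cluster resource instead of { db: "admin", collection: "" }
      resource: { cluster: true },
      actions: ["replSetStateChange", "replSetGetStatus"]
    }
  ],
  roles: ["clusterAdmin"]
})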
It creates the role in the “admin” database, but not across the cluster. If I understand this correctly, do I need to run this createRole command against the config server’s admin database plus the primary of each shard?
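To see what actually got created, I can run this from mongos (db.getRole() with showPrivileges shows the role’s privilege set, as far as I know):

use admin
db.getRole("sysad", { showPrivileges: true })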
Same question for when I create the user with this role.
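For reference, I create the user roughly like this (the username is a placeholder, and passwordPrompt() just avoids a hard-coded password):

use admin
db.createUser({
  user: "sysadmin1",                           // placeholder name
  pwd: passwordPrompt(),                       // prompt for the password
  roles: [ { role: "sysad", db: "admin" } ]    // the custom role above
})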
I’m a little confused: I thought the role would be created across the cluster, and when I created the user, I thought that would be cluster-wide as well.
GOAL: I want to allow sysadmins to log in, see cluster status, and step down a primary in order to reboot that VM. When I create the role/user and actually try it out, it works great.
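The workflow I had in mind for a sysadmin is roughly this (connected directly to the primary of the replica set in question, since rs.stepDown() is a replica-set command, not a mongos one):

rs.status()       // confirm which member is primary and that the set is healthy
rs.stepDown(120)  // step down; member won’t seek re-election for 120 seconds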
(I’m using MongoDB 6.0, if that makes any difference.)