
Sunday, 24 February 2019

PowerShell Script to change Active Directory account UPNs

Change the SearchBase to the distinguished name of the designated OU, then run the following to change the UPNs in Active Directory:
Import-Module ActiveDirectory
Get-ADUser -Filter * -SearchBase 'distinguished name' -Properties userPrincipalName | foreach { Set-ADUser $_ -UserPrincipalName "$($_.givenname + "." + $_.surname)@test.com"}

Single/Multiple SOLR Cloud 7.5 instances and Zookeeper cluster install on Linux

Install Java (OpenJDK 8 on CentOS, Oracle Java 11 on Ubuntu)

CentOS

sudo yum install java-1.8.0-openjdk.x86_64
sudo java -version

Ubuntu

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:linuxuprising/java
sudo apt-get update
sudo apt-get install oracle-java11-installer

Install LSOF

CentOS

sudo yum install lsof

Ubuntu

sudo apt-get install lsof

Disable Firewall

CentOS

sudo firewall-cmd --state
sudo systemctl stop firewalld
sudo systemctl disable firewalld

Ubuntu

sudo ufw disable

Install WGET (WGET is installed by default on Ubuntu)

CentOS

sudo yum install wget

Edit File and Process limits

Due to the file handling requirements of SOLR, the open file and open process limits need to be increased.
Edit the following file by typing:
sudo vi /etc/security/limits.conf
and add this text to the bottom of the file:

* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535

Additionally, on CentOS the 20-nproc.conf file needs to be edited to change the nproc value from 4096 to 65535:
sudo vi /etc/security/limits.d/20-nproc.conf
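After logging out and back in, you can confirm the new limits are in effect; a quick check from a fresh shell:
ulimit -n    # open files - should report 65535
ulimit -u    # max user processes - should report 65535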

Stage 1 - Install Zookeeper (quorum cluster mode)

Create the Zookeeper user:
sudo groupadd zookeeper
sudo useradd -g zookeeper -d /opt/zookeeper -s /sbin/nologin zookeeper
Download a defined version of Zookeeper from an approved mirror:
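For example, to fetch the 3.4.13 release used below (the Apache archive URL here is an assumption - substitute your approved mirror):
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz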
Move the archive to /opt, extract it and rename the extracted folder to zookeeper:
sudo mv zookeeper-3.4.13.tar.gz /opt
cd /opt
sudo tar xzf zookeeper-3.4.13.tar.gz
sudo mv zookeeper-3.4.13 zookeeper

cd zookeeper

sudo mkdir data

sudo chown -R zookeeper:zookeeper /opt/zookeeper/*

Edit the Zookeeper config file and add all of the servers that will be running Zookeeper (a Zookeeper cluster should have an odd number of nodes):

sudo vi /opt/zookeeper/conf/zoo.cfg

and add the following (replacing the example IPs with your own):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=10.10.10.10:2888:3888
server.2=10.10.10.11:2888:3888
Each node in the Zookeeper cluster needs an identifier. To set this, create a file called myid in the zookeeper/data folder on each server, containing a unique number that matches the zoo.cfg entry for that server. For example, the server defined as server.1=10.10.10.10:2888:3888 must have 1 as its myid:
cd /opt/zookeeper/data

sudo vi myid
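For example, on the server defined as server.1 the file should contain just the digit 1; a quick way to write it:
echo "1" | sudo tee /opt/zookeeper/data/myid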

Modify the firewall to allow all port traffic between your Zookeeper/SOLR nodes.

Once the firewall rules are in place, start Zookeeper on each server:
cd /opt/zookeeper/bin/
sudo ./zkServer.sh start

The zkServer.sh script accepts other arguments as well as start: stop, restart and status.
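For example, to confirm the quorum has formed, run status on each node; one node should report itself as leader and the rest as follower:
cd /opt/zookeeper/bin/
sudo ./zkServer.sh status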

(Optional) If you wish to turn this Zookeeper installation into a service and you are running systemd:

Set up the systemd unit file:
vi /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=Zookeeper Service

[Service]
Type=simple
WorkingDirectory=/opt/zookeeper/
PIDFile=/opt/zookeeper/data/zookeeper_server.pid
SyslogIdentifier=zookeeper
User=zookeeper
Group=zookeeper
ExecStart=/opt/zookeeper/bin/zkServer.sh start
ExecStop=/opt/zookeeper/bin/zkServer.sh stop
TimeoutSec=20
SuccessExitStatus=130 143
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then...
systemctl daemon-reload
systemctl enable zookeeper.service
systemctl start zookeeper.service
Test the connection with the Zookeeper CLI:
cd /opt/zookeeper/bin/
sudo ./zkCli.sh
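To test against a remote node rather than localhost, zkCli.sh also accepts a -server argument, for example:
sudo ./zkCli.sh -server 10.10.10.11:2181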

Stage 2 - Install SOLR

Create a data folder in the root of your drive to store indexes:
sudo mkdir /data
Create a user called solradmin.
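A minimal sketch for creating that user (the shell and home directory here are assumptions - adjust to your own standards):
sudo useradd -m -d /home/solradmin -s /bin/bash solradmin
sudo passwd solradmin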
Copy your SOLR config files onto your Linux servers and extract the compressed files into /home/solradmin/.
Copy your solr.xml file (in this case from /home/solradmin) into /data:
sudo cp /home/solradmin/solr.xml /data
Create a solrconfig folder in the root of your system:
sudo mkdir /solrconfig

Download and install SOLR

cd /

sudo wget http://apache.org/dist/lucene/solr/7.5.0/solr-7.5.0.tgz

sudo tar xzf solr-7.5.0.tgz solr-7.5.0/bin/install_solr_service.sh --strip-components=2

sudo ./install_solr_service.sh solr-7.5.0.tgz

sudo service solr restart
We need to customise the Solr startup configuration in /etc/default/solr.in.sh and add the following
Note - add all the Zookeeper nodes in the ZK_HOST parameter by IP or name (these are the same servers you added in the Zookeeper config setup, zoo.cfg):
# Replace these IPs below with your own IPs
ZK_HOST="10.10.10.10:2181,10.10.10.11:2181"
SOLR_OPTS="$SOLR_OPTS -Dsun.net.inetaddr.ttl=60 -Dsun.net.inetaddr.negative.ttl=60"
SOLR_HOME=/data
SOLR_HEAP="30g"
SOLR_JAVA_MEM="-Xms30g -Xmx30g"
Copy the rest of your custom config files and folders (listed below) into /solrconfig (in this case from /home/solradmin/config/):
field-aliases.yml, schema.xml, solrconfig.xml, solr.xml and the velocity folder
sudo cp -r /home/solradmin/config/* /solrconfig
sudo chmod -R 777 /data
If all has gone well you should be able to access the default SOLR instance via a browser on port 8983.
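If you prefer to check from the command line, a quick request to the Collections API on one of the nodes (using the example IP from earlier) should return the cluster status as JSON:
curl "http://10.10.10.10:8983/solr/admin/collections?action=CLUSTERSTATUS"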

Single or multiple SOLR instances

By default, this installation will run one SOLR instance per server on the default port of 8983. If you wish to run additional instances per server on their own ports, you will need a script similar to the following to force each instance onto a new port. Create two shell scripts called solrstart.sh and solrstop.sh, make them executable (a sketch of this follows the scripts below) and use them to start and stop the extra instances:
# In this instance we are using incremental port numbers, counting up from the default instance. This creates an additional two SOLR instances on every server you run the script on.
 
#solrstart.sh
sudo ./solr start -force -c -p 8984 -s /data/solr2
sudo ./solr start -force -c -p 8985 -s /data/solr3
 
 
#solrstop.sh
sudo ./solr stop -force -c -p 8984 -s /data/solr2
sudo ./solr stop -force -c -p 8985 -s /data/solr3
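A sketch of putting those scripts in place, assuming the default /opt/solr path created by install_solr_service.sh, that the scripts live alongside the solr binary in /opt/solr/bin, and that each additional instance home holds its own copy of solr.xml (unless solr.xml is stored in Zookeeper):
sudo mkdir -p /data/solr2 /data/solr3
sudo cp /data/solr.xml /data/solr2/
sudo cp /data/solr.xml /data/solr3/
cd /opt/solr/bin
sudo chmod +x solrstart.sh solrstop.sh
sudo ./solrstart.sh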

CentOS 7 - Automatic security updates

In order to enable automatic download and install of critical security updates, carry out the following:

Log on as root and manually update the server:
yum check-update
yum update

Now install yum-cron and edit its conf file:
yum update -y && yum install yum-cron -y
 
vi /etc/yum/yum-cron.conf
 
# Edit the following lines to match these values
 
update_cmd = security
update_messages = yes
download_updates = yes
apply_updates = yes
 
emit_via = email
email_from = root@localhost
email_to = root
Please change the values for email to something relevant

Now we start the service:
systemctl start yum-cron
systemctl enable yum-cron
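You can confirm the service is running with:
systemctl status yum-cron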


Cron jobs on CentOS

To run a cron job every minute you can create a new file in the folder:
/etc/cron.d

The entry in this file should be in the format:

* * * * * root /home/xxadmin/project/ascript.sh

You need a Line Feed (LF) at the end of the line to make sure that cron can read the entry. Make sure that it is not a Windows line ending (CRLF).
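You can verify the line endings with cat -A; a file saved with Windows line endings will show ^M before the $ at the end of each line (the filename here is just an example):
cat -A /etc/cron.d/ascript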

CentOS 7 minimal install with basic Puppet setup

Download the CentOS 7 minimal ISO and use it to build at least two VMs that can be configured using Puppet.


Set and enable the network and host name. You can also edit the IP settings at this point, or later with the nmtui command.

Set root password

Reboot

Update server
sudo yum -y update

Set IP configuration
nmtui
Set the IP address to manual and specify the prefix length as part of the address, e.g. 10.0.1.201/21, then set the gateway.
Set the DNS servers - your internal DNS plus Google's public DNS (8.8.8.8).

Restart networking
service network restart

Check hostname
hostname

Check disks
df

Networking options
Add hostnames to DNS (FQDN to be confirmed)
Add entries to /etc/hosts:
10.0.1.201  stage-db
10.0.1.202  stage-solr
10.0.1.250  puppet

Add the Puppet Labs repo on each server (each will run either the puppet agent or the puppet server depending on its role - most will be agents):
yum -y install http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
ll /etc/yum.repos.d/

Disable software firewall on all servers
systemctl stop firewalld; systemctl disable firewalld

For the puppet master server:
yum -y install puppet-server

Edit the puppet conf file on the master server to set parameters:
vi /etc/puppet/puppet.conf

Make changes to the [main] section, adding the dns_alt_names and certname lines shown below:
[main]
dns_alt_names = puppet, puppet.test.com
certname = puppet
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig

Generate Certs on the master puppet server
sudo puppet master --no-daemonize --verbose
Press Ctrl+C once the certificates have been generated.

Start the puppet server
systemctl start puppetmaster

and ensure service starts automatically
systemctl enable puppetmaster

For the puppet agent servers:
yum -y install puppet

Edit the puppet conf file on the agent servers so that they talk to the puppet master server:
vi /etc/puppet/puppet.conf

Make changes to the [agent] section, adding the server line shown below:
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
[agent]
server = puppet
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig

Generate certificate signing requests on each puppet agent server:
puppet agent -t

Once all servers (apart from the puppet master) have submitted their requests, return to the puppet master server to list the pending signing requests:
puppet cert list

Then, one by one, sign each cert with puppet cert sign <certname>, for example:
puppet cert sign stage-db
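If you are happy to approve every pending request in one go, you can also sign them all at once:
puppet cert sign --all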

Start and enable the puppet agents on each server:
systemctl start puppet
systemctl enable puppet

Check connectivity with
puppet agent -t

Create a puppet manifest file on the puppet server so that changes are applied to all servers:
cd /etc/puppet/manifests
vi site.pp
Paste in this basic config:
node default {
  file { '/etc/purple':
    content => 'This is a Purple test',
  }
}
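Before testing from an agent, you can optionally check the manifest syntax on the puppet master:
puppet parser validate /etc/puppet/manifests/site.pp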
Log into another server and check whether the content has been pulled
puppet agent -t

Using Docker to run multiple instances of TinyProxy

  • Install Ubuntu 14.04 (64-bit) or later
  • Ideally set a static IP, gateway and Google DNS (8.8.8.8) before continuing
  • Go through Docker installation procedure https://docs.docker.com/engine/installation/linux/ubuntulinux/
  • Update package information, ensure that APT works with the https method, and that CA certificates are installed.
$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates
  • Add the new GPG key.
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ sudo nano /etc/apt/sources.list.d/docker.list
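  • The docker.list file needs a single deb entry matched to your Ubuntu release; for the legacy docker-engine repository this was typically the following (the Trusty line below is an assumption - adjust the release name to suit):
deb https://apt.dockerproject.org/repo ubuntu-trusty main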
$ sudo apt-get update
  • Purge the old repo if it exists.
$ sudo apt-get purge lxc-docker
  • Verify that APT is pulling from the right repository.
$ apt-cache policy docker-engine
  • Prerequisites by Ubuntu version: Ubuntu Xenial 16.04 (LTS), Ubuntu Wily 15.10 and Ubuntu Trusty 14.04 (LTS). For Trusty, Wily and Xenial it is recommended to install the linux-image-extra-* kernel packages, which allow you to use the aufs storage driver.
  • To install the linux-image-extra-* packages, open a terminal on your Ubuntu host, update your package manager and then install the recommended packages:
$ sudo apt-get update
$ sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
  • Install: make sure you have installed the prerequisites for your Ubuntu version.
  • Then, install Docker using the following steps.
  • Log into your Ubuntu installation as a user with sudo privileges and update your APT package index.
$ sudo apt-get update
  • Install Docker
$ sudo apt-get install docker-engine
  • Start the docker daemon.
$ sudo service docker start
  • Verify docker is installed correctly.
$ sudo docker run hello-world
  • This command downloads a test image and runs it in a container. When the container runs, it prints an informational message. Then, it exits.
  • Find out what is running
$ sudo docker ps
  • Choose a container to download from DockerHub - https://hub.docker.com/explore/
  • In our use case we require 'TinyProxy'
  • We can download and run the container with one command. At the same time we can specify the NIC and port we wish to bind our container to. You can run multiple containers on different IPs and ports.
  • In the example below we spawn a new tinyproxy instance with the unique name 'tinyproxy1' on the IP 10.0.1.200 and we map the internal docker port of 8080 to the published port of 9090.
$ sudo docker run -d --name='tinyproxy1' -p 10.0.1.200:9090:8080 dannydirect/tinyproxy:latest ANY
  • You can spawn as many of these as your hardware permits, as long as each has a unique name, IP and port mapping.
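  • For example, a second instance on the next port following the same pattern (the name and port here simply continue the example above), plus a quick test through the first proxy with curl:
$ sudo docker run -d --name='tinyproxy2' -p 10.0.1.200:9091:8080 dannydirect/tinyproxy:latest ANY
$ curl -x http://10.0.1.200:9090 http://example.com/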