diff --git a/mahara-autoscale-cache/README.md b/mahara-autoscale-cache/README.md
new file mode 100644
index 000000000000..877c235f19cb
--- /dev/null
+++ b/mahara-autoscale-cache/README.md
@@ -0,0 +1,133 @@
+# *Deploy and manage a Scalable Mahara Cluster on Azure*
+
+These templates deploy a new Mahara site with caching for speed and autoscaling frontends to handle PHP load. The filesystem behind it is mirrored for high availability, and filesystem permissions and options have been tuned to make Mahara more secure than a default install.
+
+[![Deploy to Azure Minimally](http://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com/Azure/azure-quickstart-templates/master/mahara-autoscale-cache/azuredeploy.json) [![Visualize](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/visualizebutton.png)](http://armviz.io/#/?load=https%3A%2F%2Fraw.githubusercontent.com/Azure/azure-quickstart-templates/master/mahara-autoscale-cache/azuredeploy.json)
+
+`Tags: cluster, ha, mahara, autoscale, linux, ubuntu`
+
+
+## *What this stack will give you*
+
+This template set deploys the following infrastructure:
+- Autoscaling web frontend layer (Nginx for https termination, Varnish for caching, Apache/php or nginx/php-fpm)
+- Private virtual network for frontend instances
+- Controller instance running cron and handling syslog for the autoscaled site
+- Load balancer to balance across the autoscaled instances
+- [Azure Database for MySQL](https://azure.microsoft.com/en-us/services/mysql/) or [Azure Database for PostgreSQL](https://azure.microsoft.com/en-us/services/postgresql/)
+- Three Elasticsearch VMs for search indexing in Mahara (optional)*
+- Dual Gluster nodes for highly available access to Mahara files
+
+* Note: After the installation has completed, you will need to log in to Mahara as 'admin' and configure the artifacts you want indexed.
+
+![network_diagram](images/stack_diagram.png "Diagram of deployed stack")
+
+## *Deployment steps*
+
+Click the "Deploy to Azure" button at the beginning of this document.
+
+## *Using the created stack*
+
+In testing, stacks typically took between one and one and a half hours to finish deploying, depending on spec. Once deployment is done, you will receive JSON data with the outputs needed to continue setup. You can also retrieve these from the portal or the CLI (more information below). The available outputs are:
+
+- siteURL: If you provided a `siteURL` parameter when deploying, this will be set to the supplied value. Otherwise it will be the same as loadBalancerDNS (see below).
+- loadBalancerDNS: This is the address of your load balancer. If you provided a `siteURL` parameter when deploying, you will need to add a DNS CNAME record pointing that name to this address.
+- maharaAdminPassword: The password for the "admin" user in your Mahara install.
+- controllerInstanceIP: This is the address of the controller. You will need to SSH into this VM to make changes to your Mahara code or view logs.
+- databaseDNS: This is the public DNS name of your database instance. If you wish to set up local backups or access the database directly, you will need to use this.
+- databaseAdminUsername: The master account (not Mahara) username for your database.
+- databaseAdminPassword: The master account password for your database.
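+
+For example, once the deployment has finished you can pull the controller address out of the outputs and SSH straight to it. This is only a sketch: it assumes you are logged in to the Azure CLI and that `$MAHARA_RG_NAME` and `$MAHARA_DEPLOYMENT_NAME` hold your resource group and deployment names, as in the examples further below.
+
+```
+# Look up the controllerInstanceIP output, then connect with the sshUsername you supplied at deploy time
+CONTROLLER_IP=$(az group deployment show --resource-group $MAHARA_RG_NAME --name $MAHARA_DEPLOYMENT_NAME --out tsv --query "*.outputs.controllerInstanceIP.value")
+ssh <your-sshUsername>@$CONTROLLER_IP
+```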
+
+Once Mahara has been created, and (if necessary) with your custom `siteURL` DNS pointing to the load balancer, you should be able to load the `siteURL` and log in with "admin" and the password supplied in the maharaAdminPassword output.
+
+#### Retrieving Deployment Configuration
+
+The outputs provided by your deployment should include everything you need to manage your Mahara deployment. These are available in the portal by clicking on the deployment for your resource group. They are also available via the Azure CLI. For example:
+
+Retrieve all the outputs in JSON format:
+
+```
+az group deployment show --resource-group $MAHARA_RG_NAME --name $MAHARA_DEPLOYMENT_NAME --out json --query *.outputs
+```
+
+Retrieve just the database password:
+
+```
+az group deployment show --resource-group $MAHARA_RG_NAME --name $MAHARA_DEPLOYMENT_NAME --out tsv --query *.outputs.databaseAdminPassword.value
+```
+
+Retrieve the public URL (if you did not provide your own URL):
+
+```
+az group deployment show --resource-group $MAHARA_RG_NAME --name $MAHARA_DEPLOYMENT_NAME --out tsv --query *.outputs.siteURL.value
+```
+
+### *Updating Mahara code/settings*
+
+Your controller VM has Mahara code and data stored on /mahara. The code is stored in /mahara/html/mahara/. This is also mounted on your autoscaled frontends, so any changes take effect on all instances immediately. Depending on how large your Gluster disks are sized, it may be helpful to keep multiple older versions (/mahara/html1, /mahara/html2, etc.) to roll back to if needed.
+
+### *Getting an SQL dump*
+
+A daily SQL dump of your database is taken at 02:22 and saved to /mahara/db-backup.sql(.gz). If your database is small enough to fit, you may be able to get a more current SQL dump of your Mahara database by dumping it to /mahara/. Otherwise, you'll want to do this remotely by connecting to the hostname shown in the databaseDNS output using the databaseAdminUsername and databaseAdminPassword.
+
+While Azure does not currently back up Postgres/MySQL databases, a dump placed in /mahara is included in the Gluster VM backups if you enable Recovery Services in your parameters.
+
+### *Azure Recovery Services*
+
+If you have set azureBackupSwitch to true, Azure will provide VM backups of your Gluster nodes. This is recommended, as they hold both your Mahara code and your site data. Restoring a backed-up VM is outside the scope of this document, but Azure's documentation on Recovery Services can be found here: https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-first-look-arm
+
+
+### *Resizing your database*
+
+Note: This involves a lengthy site downtime.
+
+As mentioned above, Azure does not currently support resizing database instances. You can, however, create a new database instance and change your config to point to that. To move to a different size database you'll need to:
+
+1. Place your Mahara site into maintenance mode. You can do this either via the web interface or the command line on the controller VM.
+2. Perform an SQL dump of your database, either to /mahara or remotely to your machine.
+3. Create a new Azure database of the size you want inside your existing resource group.
+4. Using the details in /mahara/html/mahara/htdocs/config.php, create a user and database on the new server matching those details. Make sure to grant all rights on the database to the user, then restore your SQL dump into it (see the sketch after this list).
+5. On the controller instance, change the db settings in /mahara/html/mahara/htdocs/config.php to point to the new database.
+6. Take Mahara out of maintenance mode.
+7. Once confirmed working, delete the previous database instance.
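+
+For the MySQL flavour, steps 2 and 4 might look something like the following. This is a sketch only, not a tested procedure: every host name, user name and password shown is a placeholder to be replaced with the values from your deployment outputs and from config.php, and note that Azure Database for MySQL expects you to connect as '<user>@<servername>'.
+
+```
+# Step 2: dump the current database (run on the controller so the dump lands on /mahara)
+mysqldump -h <old databaseDNS> -u <databaseAdminUsername> -p \
+    --single-transaction mahara > /mahara/db-resize.sql
+
+# Step 4: create the database and user on the new server, grant rights, then load the dump
+mysql -h <new databaseDNS> -u <new databaseAdminUsername> -p -e \
+    "CREATE DATABASE mahara; CREATE USER 'mahara'@'%' IDENTIFIED BY '<dbpass from config.php>'; GRANT ALL PRIVILEGES ON mahara.* TO 'mahara'@'%'; FLUSH PRIVILEGES;"
+mysql -h <new databaseDNS> -u <new databaseAdminUsername> -p mahara < /mahara/db-resize.sql
+```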
+
+How long this takes depends entirely on the size of your database and the speed of your VM tier. It will always be a long enough window to cause a noticeable outage.
+
+### *Change the SSL cert*
+
+The self-signed certificate generated by the template is suitable for very basic testing, but a public website will require a real certificate. After purchasing a trusted certificate, copy it to the following files:
+
+- /mahara/certs/nginx.key: Your certificate's private key
+- /mahara/certs/nginx.crt: Your combined signed certificate and trust chain certificate(s).
+
+Once the files are replaced, the change takes effect immediately.
+
+### *Sizing Considerations and Limitations*
+
+Depending on what you're doing with Mahara, there are several considerations to make when configuring the stack. The defaults included produce a cluster that is inexpensive but probably too low spec to use beyond single-user Mahara testing.
+
+It should be noted that, as of the time of writing, both the Postgres and MySQL database services are in preview at Azure. In the future, larger DB sizes for different VM sizes will be available. The templates will allow you to select whatever size you want, but there are restrictions in place (VMs with certain storage types, disk size for database tiers, etc.) that may prevent certain selections from working together.
+
+### *Database sizing*
+
+As of the time of writing, Azure supports "Basic" and "Standard" tiers for database instances. In addition, skuCapacityDTU defines Compute Units, and the number of those you can use is limited by database tier:
+
+- Basic: 50, 100
+- Standard: 100, 200, 400
+
+This value also limits the maximum number of connections, as defined here: https://docs.microsoft.com/en-us/azure/mysql/concepts-limits
+
+As the Mahara database will handle cron processes as well as the website, any public-facing website with more than 10 users will likely require upgrading to 100. Once the site reaches 30+ users it will require upgrading to Standard for more Compute Units. This depends entirely on the individual site. As MySQL databases cannot change tier (or be restored to a different tier) once deployed, it is a good idea to slightly overspec your database.
+
+Standard instances have a minimum storage requirement of 128GB. All database storage, regardless of tier, has a hard upper limit of 1 terabyte. Above 128GB you gain additional IOPS for each GB, so if you're expecting a heavy amount of traffic you will want to oversize your storage. The current maximum is 3000 IOPS with a 1TB disk.
+
+### *Controller instance sizing*
+
+The controller handles both syslog and cron duties. Depending on how big your Mahara cron runs are, this may not be sufficient. If cron jobs are delayed and cron processes are building up on the controller, then an upgrade in tier is needed.
+
+### *Frontend instances*
+
+In general the frontend instances will not be the source of any bottlenecks unless they are severely undersized versus the rest of the cluster. More powerful instances will be needed should FPM processes spawn and exhaust memory during periods of heavy site load. This can also be mitigated by increasing the number of VMs, but spawning new VMs is slower (and potentially more expensive) than having that capacity already available.
+
+It is worth noting that the PHP memory allowances on these instances permit more memory than the lower instance tiers may actually be able to provide. This is intentional: you can opt to run larger VMs with more memory without needing manual configuration changes. FPM also allows a very large number of threads, which prevents the system from failing under bursts of many small jobs.
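+
+If you want a rough view of whether cron runs are backing up on the controller, or whether PHP-FPM workers are eating into a frontend's memory, checks along the following lines can help. This is a sketch only: the exact process and script names (cron.php, php-fpm7.0) are assumptions that depend on the PHP and Mahara versions installed.
+
+```
+# On the controller: list any Mahara cron runs still in flight and how long they have been running
+ps -eo pid,etime,cmd | grep '[c]ron.php'
+
+# On a frontend instance: approximate total memory used by PHP-FPM workers, in MB
+ps -o rss= -C php-fpm7.0 | awk '{ sum += $1 } END { printf "%.0f MB\n", sum/1024 }'
+```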
diff --git a/mahara-autoscale-cache/azuredeploy.json b/mahara-autoscale-cache/azuredeploy.json new file mode 100644 index 000000000000..38105fbe9502 --- /dev/null +++ b/mahara-autoscale-cache/azuredeploy.json @@ -0,0 +1,721 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "_artifactsLocation": { + "type": "string", + "metadata": { + "description": "The base URI where artifacts required by this template are located. When the template is deployed using the accompanying scripts, a private location in the subscription will be used and this value will be automatically generated." + }, + "defaultValue": "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/mahara-autoscale-cache/" + }, + "_artifactsLocationSasToken": { + "type": "securestring", + "metadata": { + "description": "The sasToken required to access _artifactsLocation. When the template is deployed using the accompanying scripts, a sasToken will be automatically generated." + }, + "defaultValue": "" + }, + "applyScriptsSwitch": { + "defaultValue": true, + "metadata": { + "description": "Switch to process or bypass all scripts/extensions" + }, + "type": "bool" + }, + "location": { + "type": "string", + "defaultValue": "[resourceGroup().location]", + "metadata": { + "description": "Location for all resources" + } + }, + "azureBackupSwitch": { + "defaultValue": false, + "metadata": { + "description": "Switch to configure AzureBackup and enlist VMs" + }, + "type": "bool" + }, + "vnetGwDeploySwitch": { + "defaultValue": false, + "metadata": { + "description": "Switch to deploy a virtual network gateway or not" + }, + "type": "bool" + }, + "htmlLocalCopySwitch": { + "defaultValue": true, + "metadata": { + "description": "Switch to create a local copy of /mahara/html or not" + }, + "type": "bool" + }, + "httpsTermination": { + "allowedValues": [ + "VMSS", + "None" + ], + "defaultValue": "VMSS", + "metadata": { + "description": "Indicates where https termination occurs. 'VMSS' is for https termination at the VMSS instance VMs (using nginx https proxy). 'None' is for testing only with no https. 'None' may not be used with a separately configured https termination layer. If you want to use the 'None' option with your separately configured https termination layer, you'll need to update your Mahara config.php manually for $cfg->wwwroot and $cfg->sslproxy." + }, + "type": "string" + }, + "siteURL": { + "defaultValue": "www.example.org", + "metadata": { + "description": "URL for Mahara site" + }, + "type": "string" + }, + "maharaVersion": { + "allowedValues": [ + "17.10_STABLE", + "17.04_STABLE" + ], + "defaultValue": "17.10_STABLE", + "metadata": { + "description": "The Mahara version you want to install."
+ }, + "type": "string" + }, + "sshPublicKey": { + "metadata": { + "description": "ssh public key" + }, + "type": "string" + }, + "sshUsername": { + "metadata": { + "description": "ssh user name" + }, + "type": "string" + }, + "controllerVmSku": { + "defaultValue": "Standard_DS1_v2", + "metadata": { + "description": "VM size for the controller VM" + }, + "type": "string" + }, + "webServerType": { + "defaultValue": "apache", + "allowedValues": [ + "apache", + "nginx" + ], + "metadata": { + "description": "Web server type" + }, + "type": "string" + }, + "autoscaleVmSku": { + "defaultValue": "Standard_DS2_v2", + "metadata": { + "description": "VM size for autoscaled web VMs" + }, + "type": "string" + }, + "autoscaleVmCount": { + "defaultValue": 10, + "metadata": { + "description": "Maximum number of autoscaled web VMs" + }, + "type": "int" + }, + "dbServerType": { + "defaultValue": "mysql", + "allowedValues": [ + "postgres", + "mysql" + ], + "metadata": { + "description": "Database type" + }, + "type": "string" + }, + "dbLogin": { + "metadata": { + "description": "Database admin username" + }, + "type": "string" + }, + "mysqlPgresVcores": { + "allowedValues": [ + 1, + 2, + 4, + 8, + 16, + 32 + ], + "defaultValue": 2, + "metadata": { + "description": "MySql/Postgresql vCores. For Basic tier, only 1 & 2 are allowed. For GeneralPurpose tier, 2, 4, 8, 16, 32 are allowed. For MemoryOptimized, 2, 4, 8, 16 are allowed." + }, + "type": "int" + }, + "mysqlPgresStgSizeGB": { + "defaultValue": 125, + "minValue": 5, + "maxValue": 1024, + "metadata": { + "description": "MySql/Postgresql storage size in GB. Minimum 5GB, increase by 1GB, up to 1TB (1024 GB)" + }, + "type": "int" + }, + "mysqlPgresSkuTier": { + "allowedValues": [ + "Basic", + "GeneralPurpose", + "MemoryOptimized" + ], + "defaultValue": "GeneralPurpose", + "metadata": { + "description": "MySql/Postgresql sku tier" + }, + "type": "string" + }, + "mysqlPgresSkuHwFamily": { + "allowedValues": [ + "Gen4", + "Gen5" + ], + "defaultValue": "Gen4", + "metadata": { + "description": "MySql/Postgresql sku hardware family" + }, + "type": "string" + }, + "mysqlVersion": { + "allowedValues": [ + "5.6", + "5.7" + ], + "defaultValue": "5.7", + "metadata": { + "description": "Mysql version" + }, + "type": "string" + }, + "postgresVersion": { + "allowedValues": [ + "9.5", + "9.6" + ], + "defaultValue": "9.6", + "metadata": { + "description": "Postgresql version" + }, + "type": "string" + }, + "sslEnforcement": { + "allowedValues": [ + "Disabled", + "Enabled" + ], + "defaultValue": "Disabled", + "metadata": { + "description": "MySql/Postgresql SSL connection" + }, + "type": "string" + }, + "fileServerType": { + "defaultValue": "nfs", + "allowedValues": [ + "gluster", + "nfs" + ], + "metadata": { + "description": "File server type: GlusterFS, NFS--not yet highly available. Gluster uses premium managed disks therefore premium skus are required." 
+ }, + "type": "string" + }, + "fileServerDiskSize": { + "defaultValue": 127, + "metadata": { + "description": "Size per disk for gluster nodes or nfs server" + }, + "type": "int" + }, + "fileServerDiskCount": { + "defaultValue": 4, + "minValue": 2, + "maxValue": 8, + "metadata": { + "description": "Number of disks in raid0 per gluster node or nfs server" + }, + "type": "int" + }, + "glusterVmSku": { + "defaultValue": "Standard_DS2_v2", + "metadata": { + "description": "VM size for the gluster nodes" + }, + "type": "string" + }, + "keyVaultResourceId": { + "defaultValue": "", + "metadata": { + "description": "Azure Resource Manager resource ID of the Key Vault in case you stored your SSL cert in an Azure Key Vault (Note that this Key Vault must have been pre-created on the same Azure region where this template is being deployed). Leave this blank if you didn't. Resource ID example: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/xxx/providers/Microsoft.KeyVault/vaults/yyy. This value can be obtained from keyvault.sh output if you used the script to store your SSL cert in your Key Vault." + }, + "type": "string" + }, + "sslCertKeyVaultURL": { + "defaultValue": "", + "metadata": { + "description": "Azure Key Vault URL for your stored SSL cert. This value can be obtained from keyvault.sh output if you used the script to store your SSL cert in your Key Vault. This parameter is ignored if the keyVaultResourceId parameter is blank." + }, + "type": "string" + }, + "sslCertThumbprint": { + "defaultValue": "", + "metadata": { + "description": "Thumbprint of your stored SSL cert. This value can be obtained from keyvault.sh output if you used the script to store your SSL cert in your Key Vault. This parameter is ignored if the keyVaultResourceId parameter is blank." + }, + "type": "string" + }, + "caCertKeyVaultURL": { + "defaultValue": "", + "metadata": { + "description": "Azure Key Vault URL for your stored CA (Certificate Authority) cert. This value can be obtained from keyvault.sh output if you used the script to store your CA cert in your Key Vault. This parameter is ignored if the keyVaultResourceId parameter is blank." + }, + "type": "string" + }, + "caCertThumbprint": { + "defaultValue": "", + "metadata": { + "description": "Thumbprint of your stored CA cert. This value can be obtained from keyvault.sh output if you used the script to store your CA cert in your Key Vault. This parameter is ignored if the keyVaultResourceId parameter is blank." 
+ }, + "type": "string" + }, + "storageAccountType": { + "defaultValue": "Standard_LRS", + "allowedValues": [ + "Standard_LRS", + "Standard_GRS", + "Standard_ZRS" + ], + "metadata": { + "description": "Storage Account type" + }, + "type": "string" + }, + "searchType": { + "defaultValue": "none", + "allowedValues": [ + "none", + "elastic" + ], + "metadata": { + "description": "options of mahara global search" + }, + "type": "string" + }, + "elasticVmSku": { + "defaultValue": "Standard_DS2_v2", + "metadata": { + "description": "VM size for the elastic search nodes" + }, + "type": "string" + }, + "vNetAddressSpace": { + "defaultValue": "172.31.0.0", + "metadata": { + "description": "Address range for the Mahara virtual network - presumed /16 - further subneting during vnet creation" + }, + "type": "string" + }, + "gatewaySubnet": { + "allowedValues": [ + "GatewaySubnet", + "MaharaGatewaySubnet", + "MyMaharaGatewaySubnet" + ], + "defaultValue": "GatewaySubnet", + "metadata": { + "description": "name for Virtual network gateway subnet" + }, + "type": "string" + }, + "gatewayType": { + "allowedValues": [ + "Vpn", + "ER" + ], + "defaultValue": "Vpn", + "metadata": { + "description": "Virtual network gateway type" + }, + "type": "string" + }, + "vpnType": { + "allowedValues": [ + "RouteBased", + "PolicyBased" + ], + "defaultValue": "RouteBased", + "metadata": { + "description": "Virtual network gateway vpn type" + }, + "type": "string" + } + }, + "resources": [ + { + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "Microsoft.Resources/deployments/networkTemplate" + ], + "name": "dbTemplate", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[variables('maharaCommon')]" + }, + "lbPubIp": { + "value": "[reference('networkTemplate').outputs.lbPubIp.value]" + }, + "ctlrPubIp": { + "value": "[reference('networkTemplate').outputs.ctlrPubIp.value]" + } + }, + "templateLink": { + "uri": "[concat(variables('maharaCommon').baseTemplateUrl, parameters('dbServerType'), '.json', parameters('_artifactsLocationSasToken'))]" + } + } + }, + { + "condition": "[parameters('azureBackupSwitch')]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "name": "recoveryTemplate", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[variables('maharaCommon')]" + } + }, + "templateLink": { + "uri": "[concat(variables('maharaCommon').baseTemplateUrl,'recoveryservices.json',parameters('_artifactsLocationSasToken'))]" + } + } + }, + { + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "name": "networkTemplate", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[variables('maharaCommon')]" + } + }, + "templateLink": { + "uri": "[concat(variables('maharaCommon').baseTemplateUrl,'network.json',parameters('_artifactsLocationSasToken'))]" + } + } + }, + { + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "Microsoft.Resources/deployments/networkTemplate", + "Microsoft.Resources/deployments/recoveryTemplate" + ], + "name": "searchTemplate", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[variables('maharaCommon')]" + } + }, + "templateLink": { + "uri": "[concat(variables('maharaCommon').baseTemplateUrl, parameters('searchType'), '-search.json', parameters('_artifactsLocationSasToken'))]" + } + } + }, + { + "condition": 
"[equals(parameters('fileServerType'),'gluster')]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "Microsoft.Resources/deployments/networkTemplate", + "Microsoft.Resources/deployments/recoveryTemplate" + ], + "name": "glusterTemplate", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[variables('maharaCommon')]" + } + }, + "templateLink": { + "uri": "[concat(variables('maharaCommon').baseTemplateUrl,'gluster.json',parameters('_artifactsLocationSasToken'))]" + } + } + }, + { + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "Microsoft.Resources/deployments/glusterTemplate", + "Microsoft.Resources/deployments/recoveryTemplate", + "Microsoft.Resources/deployments/networkTemplate", + "Microsoft.Resources/deployments/dbTemplate", + "Microsoft.Resources/deployments/searchTemplate", + "Microsoft.Resources/deployments/storageAccountTemplate" + ], + "name": "controllerTemplate", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[variables('maharaCommon')]" + }, + "ctlrPubIpId": { + "value": "[reference('networkTemplate').outputs.ctlrPubIpId.value]" + }, + "siteFQDN": { + "value": "[reference('networkTemplate').outputs.siteFQDN.value]" + } + }, + "templateLink": { + "uri": "[concat(variables('maharaCommon').baseTemplateUrl,'controller.json',parameters('_artifactsLocationSasToken'))]" + } + } + }, + { + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "Microsoft.Resources/deployments/controllerTemplate", + "Microsoft.Resources/deployments/networkTemplate", + "Microsoft.Resources/deployments/dbTemplate" + ], + "name": "scaleSetTemplate", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[variables('maharaCommon')]" + }, + "siteFQDN": { + "value": "[reference('networkTemplate').outputs.siteFQDN.value]" + } + }, + "templateLink": { + "uri": "[concat(variables('maharaCommon').baseTemplateUrl,'webvmss.json',parameters('_artifactsLocationSasToken'))]" + } + } + }, + { + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "name": "storageAccountTemplate", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[variables('maharaCommon')]" + } + }, + "templateLink": { + "uri": "[concat(variables('maharaCommon').baseTemplateUrl,'storageAccount.json',parameters('_artifactsLocationSasToken'))]" + } + } + } + ], + "outputs": { + "siteURL": { + "type": "string", + "value": "[if(equals(variables('maharaCommon').siteURL,'www.example.org'),reference('networkTemplate').outputs.siteFQDN.value,'www.example.org')]" + }, + "controllerInstanceIP": { + "type": "string", + "value": "[reference('controllerTemplate').outputs.controllerIP.value]" + }, + "databaseDNS": { + "type": "string", + "value": "[variables('maharaCommon').dbDNS]" + }, + "databaseAdminUsername": { + "type": "string", + "value": "[variables('maharaCommon').dbUsername]" + }, + "databaseAdminPassword": { + "type": "string", + "value": "[variables('maharaCommon').dbLoginPassword]" + }, + "firstFrontendVmIP": { + "type": "string", + "value": "[reference('scaleSetTemplate').outputs.webvm1IP.value]" + }, + "maharaAdminPassword": { + "type": "string", + "value": "[variables('maharaCommon').maharaAdminPass]" + }, + "maharaDbUsername": { + "type": "string", + "value": "[variables('maharaCommon').maharaDbUserAzure]" + }, + "maharaDbPassword": { + 
"type": "string", + "value": "[variables('maharaCommon').maharaDbPass]" + }, + "sshUsername": { + "type": "string", + "value": "[variables('maharaCommon').sshUsername]" + }, + "loadBalancerDNS": { + "type": "string", + "value": "[reference('networkTemplate').outputs.siteFQDN.value]" + } + }, + "variables": { + "documentation01": "This main-template calls multiple sub-templates to create the mahara system", + "documentation02": " recoveryservices0 - dummy template (see next statement)", + "documentation03": " recoveryservices1 - creates a recovery vault that will be subsequently used by the VM Backup - a paramter swtich controls whethe is is called or bypassed", + "documentation04": " postgres / mysql - creates a postgresql / mysql server", + "documentation05": " vnet - creates a virtual network with three subnets", + "documentation0j": " elastic - creates a elastic search cluster on a vm farm", + "documentation07": " gluster - creates a gluster file system on a vm farm", + "documentation08": " webvmss - creates a vm scale set", + "documentation09": " controller - creates a controller VM and deploys code", + "documentation10": "GlusterFS Sizing guidance", + "maharaCommon": { + "location": "[parameters('location')]", + "baseTemplateUrl": "[concat(parameters('_artifactsLocation'), 'nested/')]", + "scriptLocation": "[concat(parameters('_artifactsLocation'), 'scripts/')]", + "artifactsSasToken": "[parameters('_artifactsLocationSasToken')]", + + "applyScriptsSwitch": "[parameters('applyScriptsSwitch')]", + "autoscaleVmCount": "[parameters('autoscaleVmCount')]", + "autoscaleVmSku": "[parameters('autoscaleVmSku')]", + "azureBackupSwitch": "[parameters('azureBackupSwitch')]", + "commonFunctionsScriptUri": "[concat(parameters('_artifactsLocation'),'scripts/helper_functions.sh',parameters('_artifactsLocationSasToken'))]", + "controllerVmSku": "[parameters('controllerVmSku')]", + "dbLogin": "[parameters('dbLogin')]", + "dbLoginPassword": "[concat(substring(uniqueString(resourceGroup().id, deployment().name), 2, 11), '*7', toUpper('pfiwb'))]", + "dbServerType": "[parameters('dbServerType')]", + "dbUsername": "[concat(parameters('dbLogin'), '@', parameters('dbServerType'), '-', variables('resourceprefix'))]", + "elasticVmSku": "[parameters('elasticVmSku')]", + "dbDNS": "[concat(parameters('dbServerType'), '-', variables('resourcePrefix'), '.', parameters('dbServerType'), '.database.azure.com')]", + "elasticAvailabilitySetName": "[concat('elastic-avset-',variables('resourceprefix'))]", + "elasticClusterName": "[concat('es-cluster-',variables('resourceprefix'))]", + "elasticNicName1": "[concat('elastic-vm-nic-01-',variables('resourceprefix'))]", + "elasticNicName2": "[concat('elastic-vm-nic-02-',variables('resourceprefix'))]", + "elasticNicName3": "[concat('elastic-vm-nic-03-',variables('resourceprefix'))]", + "elasticScriptFilename": "install_elastic.sh", + "elasticVm1IP": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),4)), '.20')]", + "elasticVm2IP": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),4)), '.21')]", + "elasticVm3IP": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),4)), '.22')]", + "elasticVmName": "[concat('elastic-vm-',variables('resourceprefix'))]", + "elasticVmName1": "[concat('elastic-vm-01-',variables('resourceprefix'))]", + "elasticVmName2": "[concat('elastic-vm-02-',variables('resourceprefix'))]", + 
"elasticVmName3": "[concat('elastic-vm-03-',variables('resourceprefix'))]", + "extBeName": "[concat('lb-backend-',variables('resourceprefix'))]", + "extFeName": "[concat('lb-frontend-',variables('resourceprefix'))]", + "extNatPool": "[concat('lb-natpool-',variables('resourceprefix'))]", + "extProbe": "[concat('lb-probe-',variables('resourceprefix'))]", + "fileServerDiskCount": "[parameters('fileServerDiskCount')]", + "fileServerDiskSize": "[parameters('fileServerDiskSize')]", + "fileServerType": "[parameters('fileServerType')]", + "gatewayName": "[concat('vnet-gateway-',variables('resourceprefix'))]", + "gatewayPublicIPName": "[concat('vnet-gw-ip-',variables('resourceprefix'))]", + "gatewaySubnet": "[parameters('gatewaySubnet')]", + "gatewaySubnetPrefix": "[concat(variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),2)))]", + "gatewaySubnetRange": "[concat(variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),2)), '.0/24')]", + "gatewayType": "[parameters('gatewayType')]", + "gfsNameRoot": "[concat('gluster-vm-',variables('resourceprefix'))]", + "gfxAvailabilitySetName": "[concat('gluster-avset-',variables('resourceprefix'))]", + "glusterScriptFilename": "install_gluster.sh", + "glusterVmCount": 2, + "glusterVmSku": "[parameters('glusterVmSku')]", + "htmlLocalCopySwitch": "[parameters('htmlLocalCopySwitch')]", + "httpsTermination": "[parameters('httpsTermination')]", + "ctlrNicName": "[concat('controller-vm-nic-',variables('resourceprefix'))]", + "ctlrNsgName": "[concat('controller-nsg-',variables('resourceprefix'))]", + "ctlrPipName": "[concat('controller-pubip-',variables('resourceprefix'))]", + "ctlrVmName": "[concat('controller-vm-',variables('resourceprefix'))]", + "ctlrVmSecrets": "[take(variables('ctlrVmSecretsArray'), if(empty(parameters('keyVaultResourceId')), 0, 1))]", + "lbName": "[concat('lb-',variables('resourceprefix'))]", + "lbPipName": "[concat('lb-pubip-',variables('resourceprefix'))]", + "maharaAdminPass": "[concat(toUpper('xl'), substring(uniqueString(resourceGroup().id, deployment().name), 6, 7),',1*8')]", + "maharaDbName": "mahara", + "maharaDbPass": "[concat('9#36^', substring(uniqueString(resourceGroup().id, deployment().name), 5, 8), toUpper('ercq'))]", + "maharaDbUser": "mahara", + "maharaDbUserAzure": "[concat('mahara', '@', parameters('dbServerType'), '-', variables('resourceprefix'))]", + "maharaInstallScriptFilename": "install_mahara.sh", + "maharaVersion": "[parameters('maharaVersion')]", + "mysqlPgresSkuHwFamily": "[parameters('mysqlPgresSkuHwFamily')]", + "mysqlPgresSkuName": "[concat(if(equals(parameters('mysqlPgresSkuTier'),'Basic'),'B', if(equals(parameters('mysqlPgresSkuTier'),'GeneralPurpose'),'GP', 'MO')), '_', parameters('mysqlPgresSkuHwFamily'), '_', string(parameters('mysqlPgresVcores')))]", + "mysqlPgresSkuTier": "[parameters('mysqlPgresSkuTier')]", + "mysqlPgresStgSizeGB": "[parameters('mysqlPgresStgSizeGB')]", + "mysqlPgresVcores": "[parameters('mysqlPgresVcores')]", + "mysqlVersion": "[parameters('mysqlVersion')]", + "osType": { + "offer": "UbuntuServer", + "publisher": "Canonical", + "sku": "16.04-LTS", + "version": "latest" + }, + "policyName": "[concat('policy-',variables('resourceprefix'))]", + "postgresVersion": "[parameters('postgresVersion')]", + "resourcesPrefix": "[variables('resourceprefix')]", + "searchType": "[parameters('searchType')]", + "serverName": "[concat(parameters('dbServerType'), '-',variables('resourceprefix'))]", + "siteURL": 
"[parameters('siteURL')]", + "sshPublicKey": "[parameters('sshPublicKey')]", + "sshUsername": "[parameters('sshUsername')]", + "sslEnforcement": "[parameters('sslEnforcement')]", + "storageAccountName": "[tolower(concat('abs',variables('resourceprefix')))]", + "storageAccountType": "[parameters('storageAccountType')]", + "subnetElastic": "[concat('elastic-subnet-',variables('resourceprefix'))]", + "subnetElasticPrefix": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),4)))]", + "subnetElasticRange": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),4)), '.0/24')]", + "subnetSan": "[concat('san-subnet-',variables('resourceprefix'))]", + "subnetSanPrefix": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),1)))]", + "subnetSanRange": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),1)), '.0/24')]", + "subnetWeb": "[concat('web-subnet-',variables('resourceprefix'))]", + "subnetWebPrefix": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),0)))]", + "subnetWebRange": "[concat( variables('octets')[0], '.', variables('octets')[1], '.', string(add(int(variables('octets')[2]),0)), '.0/24')]", + "thumbprintSslCert": "[if(or(empty(parameters('keyVaultResourceId')), empty(parameters('sslCertThumbprint'))), 'None', parameters('sslCertThumbprint'))]", + "thumbprintCaCert": "[if(or(empty(parameters('keyVaultResourceId')), empty(parameters('caCertThumbprint'))), 'None', parameters('caCertThumbprint'))]", + "vNetAddressSpace": "[parameters('vNetAddressSpace')]", + "vaultName": "[concat('vault-',variables('resourceprefix'))]", + "vmssName": "[concat('vmss-',variables('resourceprefix'))]", + "vmssdStorageAccounttName": "[concat('vmss',uniqueString(resourceGroup().id))]", + "vnetGwDeploySwitch": "[parameters('vnetGwDeploySwitch')]", + "vnetName": "[concat('vnet-',variables('resourceprefix'))]", + "vpnType": "[parameters('vpnType')]", + "webServerSetupScriptFilename": "setup_webserver.sh", + "webServerType": "[parameters('webServerType')]" + }, + "certUrlArray": [ + { + "certificateUrl": "[parameters('sslCertKeyVaultURL')]" + }, + { + "certificateUrl": "[parameters('caCertKeyVaultURL')]" + } + ], + "ctlrVmSecretsArray": [ + { + "sourceVault": { + "id": "[parameters('keyVaultResourceId')]" + }, + "vaultCertificates": "[take(variables('certUrlArray'), if(empty(parameters('caCertKeyVaultURL')), 1, 2))]" + } + ], + "octets": "[split(parameters('vNetAddressSpace'), '.')]", + "resourceprefix": "[substring(uniqueString(resourceGroup().id, deployment().name), 3, 6)]" + } +} diff --git a/mahara-autoscale-cache/azuredeploy.parameters.json b/mahara-autoscale-cache/azuredeploy.parameters.json new file mode 100644 index 000000000000..00374607fa67 --- /dev/null +++ b/mahara-autoscale-cache/azuredeploy.parameters.json @@ -0,0 +1,9 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "sshPublicKey": { "value": "GEN-SSH-PUB-KEY" }, + "sshUsername": { "value": "GEN-UNIQUE-8"}, + "dbLogin": { "value": "GEN-UNIQUE-8"} + } +} diff --git a/mahara-autoscale-cache/images/stack_diagram.png b/mahara-autoscale-cache/images/stack_diagram.png new file mode 100644 index 000000000000..d9c22343bf25 Binary files /dev/null and 
b/mahara-autoscale-cache/images/stack_diagram.png differ diff --git a/mahara-autoscale-cache/metadata.json b/mahara-autoscale-cache/metadata.json new file mode 100644 index 000000000000..3cc05044755b --- /dev/null +++ b/mahara-autoscale-cache/metadata.json @@ -0,0 +1,8 @@ +{ + "$schema": "https://aka.ms/azure-quickstart-templates-metadata-schema#", + "itemDisplayName": "Autoscalable Mahara on Azure", + "description": "Deploys an autoscaling Mahara cluster with configurable Azure MySQL/Postgres and Elasticsearch. Can be configured for very small or very large sites. Deploys frontend components to a private network with a jumphost to access nodes. Requires keyed SSH access.", + "summary": "Mahara autoscale with db and elasticsearch", + "githubUsername": "darrin2016", + "dateUpdated": "2018-05-27" +} diff --git a/mahara-autoscale-cache/nested/controller.json b/mahara-autoscale-cache/nested/controller.json new file mode 100644 index 000000000000..7ec950f6f39f --- /dev/null +++ b/mahara-autoscale-cache/nested/controller.json @@ -0,0 +1,230 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + }, + "ctlrPubIpId": { + "metadata": { + "description": "Resource ID of the controller VM public IP address" + }, + "type": "string" + }, + "siteFQDN": { + "metadata": { + "description": "FQDN of public IP address" + }, + "type": "string" + } + }, + + "resources": [ + { + "type": "Microsoft.Network/networkSecurityGroups", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').ctlrNsgName]", + "properties": { + "securityRules": [ + { + "name": "Allow_SSH", + "properties": { + "access": "Allow", + "destinationAddressPrefix": "*", + "destinationPortRange": "22", + "direction": "Inbound", + "priority": 1000, + "protocol": "Tcp", + "sourceAddressPrefix": "*", + "sourcePortRange": "*" + } + }, + { + "name": "Allow_http", + "properties": { + "access": "Allow", + "destinationAddressPrefix": "*", + "destinationPortRange": "80", + "direction": "Inbound", + "priority": 1005, + "protocol": "Tcp", + "sourceAddressPrefix": "*", + "sourcePortRange": "*" + } + } + ] + }, + "tags": { + "displayName": "Controller NSG" + } + }, + { + "type": "Microsoft.Network/networkInterfaces", + "apiVersion": "2017-10-01", + "dependsOn": [ + "[concat('Microsoft.Network/networkSecurityGroups/', parameters('maharaCommon').ctlrNsgName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').ctlrNicName]", + "properties": { + "networkSecurityGroup": { + "id": "[variables('nsgRef')]" + }, + "ipConfigurations": [ + { + "name": "ipcfgctlr", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[parameters('ctlrPubIpId')]" + }, + "subnet": { + "id": "[variables('subnetWebRef')]" + } + } + } + ] + }, + "tags": { + "displayName": "ctlrNic" + } + }, + { + "type": "Microsoft.Compute/virtualMachines", + "apiVersion": "2017-03-30", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('maharaCommon').ctlrNicName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').ctlrVmName]", + "properties": { + "hardwareProfile": { + "vmSize": "[parameters('maharaCommon').controllerVmSku]" + }, + "networkProfile": { + "networkInterfaces": [ + { + 
"id": "[variables('nicRef')]" + } + ] + }, + "osProfile": { + "adminUsername": "[parameters('maharaCommon').sshUsername]", + "computerName": "[parameters('maharaCommon').ctlrVmName]", + "secrets": "[parameters('maharaCommon').ctlrVmSecrets]", + "linuxConfiguration": { + "disablePasswordAuthentication": true, + "ssh": { + "publicKeys": [ + { + "path": "[concat('/home/', parameters('maharaCommon').sshUsername, '/.ssh/authorized_keys')]", + "keyData": "[parameters('maharaCommon').sshPublicKey]" + } + ] + } + } + }, + "storageProfile": { + "imageReference": "[parameters('maharaCommon').osType]", + "osDisk": { + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Standard_LRS" + }, + "name": "[parameters('maharaCommon').ctlrVmName]" + }, + "dataDisks": "[take(variables('nfsDiskArray'),if(equals(parameters('maharaCommon').fileServerType,'nfs'), parameters('maharaCommon').fileServerDiskCount, 0))]" + } + }, + "tags": { + "displayName": "Controller Virtual Machine" + } + }, + { + "condition": "[parameters('maharaCommon').applyScriptsSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/', parameters('maharaCommon').ctlrVmName)]" + ], + "name": "[concat(parameters('maharaCommon').ctlrVmName,'-ScriptProcessor')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + }, + "siteFQDN": { + "value": "[parameters('siteFQDN')]" + } + }, + + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl, 'controllerconfig.json', parameters('maharaCommon').artifactsSasToken)]" + } + } + }, + { + "condition": "[parameters('maharaCommon').azureBackupSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/',parameters('maharaCommon').ctlrVmName)]" + ], + "name": "[concat(parameters('maharaCommon').ctlrVmName,'-Backup')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + }, + "vmName": { + "value": "[parameters('maharaCommon').ctlrVmName]" + } + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'recoveryservicesEnlist.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + } + ], + "variables": { + "documentation01": "This sub-template drives the controller/jump-box which is used as the access-point for other mahara VM's ", + "documentation02": "It expects certain values in the 'common' datastructure.", + "documentation03": " vnetName - name of virtual network", + "documentation04": " subnetWeb - name of subnet for controller (and vm scale set)", + "documentation06": " ctlrPipName - name of Public IP address for the controller (note that none of the other VM's get a PIP - just the controller", + "documentation07": " ctlrNicName - name of the network interface (all VM's must hae a nic) to crate, tied to the public IP address", + "documentation08": " ctlrNsgName - name of the network security group, regulating access to/from the controller", + "documentation09": "This sub-template calls other sub-templates", + "documentation10": " controllerconfig - conditionally applies post-deployment script on the VM", + "documentation18": " recoveryservicesEnlist - conditionally enlists the VM into the backup regimen", + "nicRef": "[resourceId('Microsoft.Network/networkInterfaces', 
parameters('maharaCommon').ctlrNicName)]", + "nsgRef": "[resourceId('Microsoft.Network/networkSecurityGroups', parameters('maharaCommon').ctlrNsgName)]", + "subnetWebRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('maharaCommon').vnetName, parameters('maharaCommon').subnetWeb)]", + "copy": [ + { + "name": "nfsDiskArray", + "count": 8, + "input": { + "managedDisk": { + "storageAccountType": "Premium_LRS" + }, + "diskSizeGB": "[parameters('maharaCommon').fileServerDiskSize]", + "lun": "[copyIndex('nfsDiskArray')]", + "createOption": "Empty" + } + } + ] + }, + "outputs": { + "controllerIP": { + "value": "[reference(resourceId('Microsoft.Network/publicIPAddresses', parameters('maharaCommon').ctlrPipName), '2017-10-01').ipAddress]", + "type": "string" + } + } +} diff --git a/mahara-autoscale-cache/nested/controllerconfig.json b/mahara-autoscale-cache/nested/controllerconfig.json new file mode 100644 index 000000000000..95b0cc376f9d --- /dev/null +++ b/mahara-autoscale-cache/nested/controllerconfig.json @@ -0,0 +1,62 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + }, + "siteFQDN": { + "metadata": { + "description": "FQDN of public IP address" + }, + "type": "string" + } + + }, + "resources": [ + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "apiVersion": "2017-03-30", + "location": "[parameters('maharaCommon').location]", + "name": "[concat(parameters('maharaCommon').ctlrVmName,'/','install_mahara')]", + "properties": { + "autoUpgradeMinorVersion": true, + "publisher": "Microsoft.Azure.Extensions", + "settings": { + "fileUris": [ + "[variables('scriptUri')]", + "[parameters('maharaCommon').commonFunctionsScriptUri]" + ] + }, + "protectedSettings":{ + "commandToExecute": "[concat('bash ', parameters('maharaCommon').maharaInstallScriptFilename, ' ', parameters('maharaCommon').maharaVersion, ' ', concat(parameters('maharaCommon').gfsNameRoot, '0'), ' ', 'data', ' ', parameters('maharaCommon').siteURL, ' ', parameters('maharaCommon').httpsTermination, ' ', parameters('maharaCommon').dbDNS, ' ', parameters('maharaCommon').maharaDbName, ' ', parameters('maharaCommon').maharaDbUser, ' ', parameters('maharaCommon').maharaDbPass, ' ', parameters('maharaCommon').maharaAdminPass, ' ', concat(parameters('maharaCommon').dbLogin, '@', parameters('maharaCommon').dbServerType, '-', parameters('maharaCommon').resourcesPrefix), ' ', parameters('maharaCommon').dbLoginPassword, ' ', parameters('maharaCommon').storageAccountName, ' ', listKeys(variables('storageAccountId'), '2017-06-01').keys[0].value, ' ', parameters('maharaCommon').maharaDbUserAzure, ' ', parameters('maharaCommon').elasticVm1IP, ' ', parameters('maharaCommon').dbServerType, ' ', parameters('maharaCommon').fileServerType , ' ', parameters('maharaCommon').thumbprintSslCert, ' ', parameters('maharaCommon').thumbprintCaCert, ' ', parameters('maharaCommon').searchType, ' ' , parameters('siteFQDN'))]" + }, + "type": "CustomScript", + "typeHandlerVersion": "2.0" + }, + "tags": { + "displayName": "install_mahara" + } + } + ], + "variables": { + "documentation01": "This sub-template applies a specific post-deployment script to the controller vm", + "documentation02": "It expects certain values in the 'common' datastructure.", + "documentation03": " scriptLocation - web URI", + "documentation04": " 
maharaInstallScriptFilename - name of script file", + "documentation05": " siteURL - URL of the website", + "documentation06": " gfsNameRoot - nameroot of gluster farm - note that the code applies a 0 to get to the first node", + "documentation07": " ctlrVmName - name of the controller/jumpb ox VM", + "documentation08": " dbServerType - postgres or mysql", + "documentation09": " maharaDbName - database name for mahara", + "documentation10": " maharaDbUser - database user for mahara", + "documentation11": " maharaDbPass - database password for maharaDbUser", + "documentation12": " maharaAdminPass - password for mahara admin user", + + "scriptUri": "[concat(parameters('maharaCommon').scriptLocation,parameters('maharaCommon').maharaInstallScriptFilename,parameters('maharaCommon').artifactsSasToken)]", + "storageAccountId": "[resourceId('Microsoft.Storage/storageAccounts', parameters('maharaCommon').storageAccountName)]" + } +} diff --git a/mahara-autoscale-cache/nested/elastic-search.json b/mahara-autoscale-cache/nested/elastic-search.json new file mode 100644 index 000000000000..b970a7012dea --- /dev/null +++ b/mahara-autoscale-cache/nested/elastic-search.json @@ -0,0 +1,387 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + } + }, + "resources": [ + { + "type": "Microsoft.Network/networkInterfaces", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').elasticNicName1]", + "properties": { + "ipConfigurations": [ + { + "name": "ipcfg-elastic1", + "properties": { + "privateIPAllocationMethod": "Static", + "privateIPAddress": "[parameters('maharaCommon').elasticVm1IP]", + "subnet": { + "id": "[variables('subnetElasticRef')]" + } + } + } + ] + }, + "tags": { + "displayName": "Elastic NIC 1" + } + }, + { + "type": "Microsoft.Compute/virtualMachines", + "apiVersion": "2017-03-30", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('maharaCommon').elasticNicName1)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').elasticVmName1]", + "properties": { + "hardwareProfile": { + "vmSize": "[parameters('maharaCommon').elasticVmSku]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[variables('nicRef1')]" + } + ] + }, + "osProfile": { + "adminUsername": "[parameters('maharaCommon').sshUsername]", + "computerName": "[parameters('maharaCommon').elasticVmName1]", + "linuxConfiguration": { + "disablePasswordAuthentication": true, + "ssh": { + "publicKeys": [ + { + "path": "[concat('/home/', parameters('maharaCommon').sshUsername, '/.ssh/authorized_keys')]", + "keyData": "[parameters('maharaCommon').sshPublicKey]" + } + ] + } + } + }, + "storageProfile": { + "dataDisks": [], + "imageReference": "[parameters('maharaCommon').osType]", + "osDisk": { + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Standard_LRS" + }, + "name": "[parameters('maharaCommon').elasticVmName1]" + } + } + }, + "tags": { + "displayName": "Elastic Search Virtual Machine" + } + }, + { + "condition": "[parameters('maharaCommon').applyScriptsSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/', parameters('maharaCommon').elasticVmName1)]" + ], + "name": 
"[concat(parameters('maharaCommon').elasticVmName1,'-ScriptProcessor')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + } + + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'elasticconfig.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + }, + { + "condition": "[parameters('maharaCommon').azureBackupSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/',parameters('maharaCommon').elasticVmName1)]" + ], + "name": "[concat(parameters('maharaCommon').elasticVmName1,'-Backup')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + }, + "vmName": { + "value": "[parameters('maharaCommon').elasticVmName1]" + } + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'recoveryservicesEnlist.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + }, + { + "type": "Microsoft.Network/networkInterfaces", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').elasticNicName2]", + "properties": { + "ipConfigurations": [ + { + "name": "ipcfg-elastic2", + "properties": { + "privateIPAllocationMethod": "Static", + "privateIPAddress": "[parameters('maharaCommon').elasticVm2IP]", + "subnet": { + "id": "[variables('subnetElasticRef')]" + } + } + } + ] + }, + "tags": { + "displayName": "Elastic NIC 2" + } + }, + { + "type": "Microsoft.Compute/virtualMachines", + "apiVersion": "2017-03-30", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('maharaCommon').elasticNicName2)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').elasticVmName2]", + "properties": { + "hardwareProfile": { + "vmSize": "[parameters('maharaCommon').elasticVmSku]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[variables('nicRef2')]" + } + ] + }, + "osProfile": { + "adminUsername": "[parameters('maharaCommon').sshUsername]", + "computerName": "[parameters('maharaCommon').elasticVmName2]", + "linuxConfiguration": { + "disablePasswordAuthentication": true, + "ssh": { + "publicKeys": [ + { + "path": "[concat('/home/', parameters('maharaCommon').sshUsername, '/.ssh/authorized_keys')]", + "keyData": "[parameters('maharaCommon').sshPublicKey]" + } + ] + } + } + }, + "storageProfile": { + "dataDisks": [], + "imageReference": "[parameters('maharaCommon').osType]", + "osDisk": { + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Standard_LRS" + }, + "name": "[parameters('maharaCommon').elasticVmName2]" + } + } + }, + "tags": { + "displayName": "Elastic Search Virtual Machine" + } + }, + { + "condition": "[parameters('maharaCommon').applyScriptsSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/', parameters('maharaCommon').elasticVmName2)]" + ], + "name": "[concat(parameters('maharaCommon').elasticVmName2,'-ScriptProcessor')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + } + + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'elasticconfig.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + }, + { + "condition": 
"[parameters('maharaCommon').azureBackupSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/',parameters('maharaCommon').elasticVmName2)]" + ], + "name": "[concat(parameters('maharaCommon').elasticVmName2,'-Backup')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + }, + "vmName": { + "value": "[parameters('maharaCommon').elasticVmName2]" + } + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'recoveryservicesEnlist.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + }, + { + "type": "Microsoft.Network/networkInterfaces", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').elasticNicName3]", + "properties": { + "ipConfigurations": [ + { + "name": "ipcfg-elastic3", + "properties": { + "privateIPAllocationMethod": "Static", + "privateIPAddress": "[parameters('maharaCommon').elasticVm3IP]", + "subnet": { + "id": "[variables('subnetElasticRef')]" + } + } + } + ] + }, + "tags": { + "displayName": "Elastic NIC 2" + } + }, + { + "type": "Microsoft.Compute/virtualMachines", + "apiVersion": "2017-03-30", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('maharaCommon').elasticNicName3)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').elasticVmName3]", + "properties": { + "hardwareProfile": { + "vmSize": "[parameters('maharaCommon').elasticVmSku]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[variables('nicRef3')]" + } + ] + }, + "osProfile": { + "adminUsername": "[parameters('maharaCommon').sshUsername]", + "computerName": "[parameters('maharaCommon').elasticVmName3]", + "linuxConfiguration": { + "disablePasswordAuthentication": true, + "ssh": { + "publicKeys": [ + { + "path": "[concat('/home/', parameters('maharaCommon').sshUsername, '/.ssh/authorized_keys')]", + "keyData": "[parameters('maharaCommon').sshPublicKey]" + } + ] + } + } + }, + "storageProfile": { + "dataDisks": [], + "imageReference": "[parameters('maharaCommon').osType]", + "osDisk": { + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Standard_LRS" + }, + "name": "[parameters('maharaCommon').elasticVmName3]" + } + } + }, + "tags": { + "displayName": "Elastic Search Virtual Machine" + } + }, + { + "condition": "[parameters('maharaCommon').applyScriptsSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/', parameters('maharaCommon').elasticVmName3)]" + ], + "name": "[concat(parameters('maharaCommon').elasticVmName3,'-ScriptProcessor')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + } + + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'elasticconfig.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + }, + { + "condition": "[parameters('maharaCommon').azureBackupSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/',parameters('maharaCommon').elasticVmName3)]" + ], + "name": "[concat(parameters('maharaCommon').elasticVmName3,'-Backup')]", + "properties": { + "mode": "Incremental", + "parameters": { + 
"maharaCommon": { + "value": "[parameters('maharaCommon')]" + }, + "vmName": { + "value": "[parameters('maharaCommon').elasticVmName3]" + } + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'recoveryservicesEnlist.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + } + ], + "variables": { + "documentation01": "This sub-template drives the elastic which is used as the access-point for other mahara VM's ", + "documentation02": "It expects certain values in the 'common' datastructure.", + "documentation03": " vnetName - name of the virtual network", + "documentation04": " subnetElastic - name of subnet for elastic (and vm scale set)", + "documentation06": " elasticNicName1 - name of the eastlic vm 1 network interface", + "documentation07": " elasticNicName2 - name of the eastlic vm 2 network interface", + "documentation08": " elasticNicName3 - name of the eastlic vm 3 network interface", + "documentation09": " elasticVmName1 - name of the eastlic vm 1", + "documentation10": " elasticVmName2 - name of the eastlic vm 2", + "documentation11": " elasticVmName3 - name of the eastlic vm 3", + "documentation12": " elasticVm1IP - IP of the eastlic vm 1", + "documentation13": " elasticVm2IP - IP of the eastlic vm 2", + "documentation14": " elasticVm3IP - IP of the eastlic vm 3", + "documentation15": "This sub-template calls other sub-templates", + "documentation16": " elasticconfig - conditionally applies post-deployment script on the VM", + "documentation17": " recoveryservicesEnlist - conditionally enlists the VM into the backup regimen", + "nicRef1": "[resourceId('Microsoft.Network/networkInterfaces', parameters('maharaCommon').elasticNicName1)]", + "nicRef2": "[resourceId('Microsoft.Network/networkInterfaces', parameters('maharaCommon').elasticNicName2)]", + "nicRef3": "[resourceId('Microsoft.Network/networkInterfaces', parameters('maharaCommon').elasticNicName3)]", + "subnetElasticRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('maharaCommon').vnetName, parameters('maharaCommon').subnetElastic)]" + } +} diff --git a/mahara-autoscale-cache/nested/elasticconfig.json b/mahara-autoscale-cache/nested/elasticconfig.json new file mode 100644 index 000000000000..fab42c1e80eb --- /dev/null +++ b/mahara-autoscale-cache/nested/elasticconfig.json @@ -0,0 +1,92 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + } + }, + "resources": [ + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "apiVersion": "2017-03-30", + "location": "[parameters('maharaCommon').location]", + "name": "[concat(parameters('maharaCommon').elasticVmName1,'/','install_elastic')]", + "properties": { + "autoUpgradeMinorVersion": true, + "publisher": "Microsoft.Azure.Extensions", + "settings": { + "fileUris": [ + "[variables('scriptUri')]" + ] + }, + "protectedSettings":{ + "commandToExecute": "[variables('cmdExec')]" + }, + "type": "CustomScript", + "typeHandlerVersion": "2.0" + }, + "tags": { + "displayName": "install_elastic" + } + }, + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "apiVersion": "2017-03-30", + "location": "[parameters('maharaCommon').location]", + "name": "[concat(parameters('maharaCommon').elasticVmName2,'/','install_elastic')]", + "properties": { + "autoUpgradeMinorVersion": true, + "publisher": 
"Microsoft.Azure.Extensions", + "settings": { + "fileUris": [ + "[variables('scriptUri')]" + ] + }, + "protectedSettings":{ + "commandToExecute": "[variables('cmdExec')]" + }, + "type": "CustomScript", + "typeHandlerVersion": "2.0" + }, + "tags": { + "displayName": "install_elastic" + } + }, + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "apiVersion": "2017-03-30", + "location": "[parameters('maharaCommon').location]", + "name": "[concat(parameters('maharaCommon').elasticVmName3,'/','install_elastic')]", + "properties": { + "autoUpgradeMinorVersion": true, + "publisher": "Microsoft.Azure.Extensions", + "settings": { + "fileUris": [ + "[variables('scriptUri')]" + ] + }, + "protectedSettings":{ + "commandToExecute": "[variables('cmdExec')]" + }, + "type": "CustomScript", + "typeHandlerVersion": "2.0" + }, + "tags": { + "displayName": "install_elastic" + } + } + ], + "variables": { + "cmdExec": "[concat('bash ', parameters('maharaCommon').elasticScriptFilename, ' ', parameters('maharaCommon').elasticClusterName, ' ', parameters('maharaCommon').elasticVm1IP, ' ', parameters('maharaCommon').elasticVm2IP, ' ', parameters('maharaCommon').elasticVm3IP)]", + "documentation01": "This sub-template applies a specific post-deployment script to the controller vm", + "documentation02": "It expects certain values in the 'common' datastructure.", + "documentation03": " scriptLocation - web URI", + "documentation04": " elasticScriptFilename - name of script file", + "documentation05": " elasticVmName - name of the elastic search vm generic name", + "scriptUri": "[concat(parameters('maharaCommon').scriptLocation,parameters('maharaCommon').elasticScriptFilename,parameters('maharaCommon').artifactsSasToken)]" + } +} diff --git a/mahara-autoscale-cache/nested/gluster.json b/mahara-autoscale-cache/nested/gluster.json new file mode 100644 index 000000000000..7b6783f61d9e --- /dev/null +++ b/mahara-autoscale-cache/nested/gluster.json @@ -0,0 +1,64 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + } + }, + "resources": [ + { + "type": "Microsoft.Compute/availabilitySets", + "apiVersion": "2017-03-30", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').gfxAvailabilitySetName]", + "properties": { + "platformFaultDomainCount": 2, + "platformUpdateDomainCount": 5 + }, + "sku": { + "name": "Aligned" + }, + "tags": { + "displayName": "Gluster Availability Set" + } + }, + { + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "copy": { + "count": "[parameters('maharaCommon').glusterVmCount]", + "name": "vmloop" + }, + "dependsOn": [ + "[concat('Microsoft.Compute/availabilitySets/',parameters('maharaCommon').gfxAvailabilitySetName)]" + ], + "name": "[concat('glustervm',copyindex())]", + "properties": { + "mode": "Incremental", + "parameters": { + "counter": { + "value": "[copyindex()]" + }, + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + } + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'glustervm.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + } + ], + "variables": { + "documentation1": "This sub-template drives the gluster (scale-out network-attached storage file system) creation process.", + "documentation2": "It expects certain values in the 'common' datastructure.", + 
"documentation4": " gfxAvailabilitySetName - name of availability set for the gluster farm", + "documentation5": " glusterVmCount - number of nodes to create", + "documentation6": "This sub-template calls other sub-templates", + "documentation7": " glustervm - number of nodes in the gluster farm" + } +} diff --git a/mahara-autoscale-cache/nested/glustervm.json b/mahara-autoscale-cache/nested/glustervm.json new file mode 100644 index 000000000000..a3686a382352 --- /dev/null +++ b/mahara-autoscale-cache/nested/glustervm.json @@ -0,0 +1,178 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "counter": { + "metadata": { + "description": "from the copyindex function of calling template" + }, + "type": "int" + }, + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + } + }, + "resources": [ + { + "type": "Microsoft.Network/networkInterfaces", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[variables('nicName')]", + "properties": { + "ipConfigurations": [ + { + "name": "ipcfggfs", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "subnet": { + "id": "[variables('subnetSanRef')]" + } + } + } + ] + }, + "tags": { + "displayName": "Gluster VM NIC" + } + }, + { + "type": "Microsoft.Compute/virtualMachines", + "apiVersion": "2017-03-30", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[variables('vmName')]", + "properties": { + "availabilitySet": { + "id": "[variables('asRef')]" + }, + "hardwareProfile": { + "vmSize": "[parameters('maharaCommon').glusterVmSku]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[variables('nicRef')]" + } + ] + }, + "osProfile": { + "adminUsername": "[parameters('maharaCommon').sshUsername]", + "computerName": "[variables('vmName')]", + "linuxConfiguration": { + "disablePasswordAuthentication": true, + "ssh": { + "publicKeys": [ + { + "path": "[concat('/home/', parameters('maharaCommon').sshUsername, '/.ssh/authorized_keys')]", + "keyData": "[parameters('maharaCommon').sshPublicKey]" + } + ] + } + } + }, + "storageProfile": { + "imageReference": "[parameters('maharaCommon').osType]", + "osDisk": { + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Premium_LRS" + }, + "name": "[variables('vmName')]" + }, + "copy": [ + { + "name": "dataDisks", + "count": "[parameters('maharaCommon').fileServerDiskCount]", + "input": { + "managedDisk": { + "storageAccountType": "Premium_LRS" + }, + "diskSizeGB": "[parameters('maharaCommon').fileServerDiskSize]", + "lun": "[copyIndex('dataDisks')]", + "createOption": "Empty" + } + } + ] + } + }, + "tags": { + "displayName": "Gluster Virtual Machine" + } + }, + { + "condition": "[parameters('maharaCommon').applyScriptsSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/',variables('vmName'))]" + ], + "name": "[concat(variables('vmName'),'-ScriptProcessor')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + }, + "vmName": { + "value": "[ variables('vmName')]" + }, + "vmNumber": { + "value": "[parameters('counter')]" + } + }, + "templateLink": { + "uri": 
"[concat(parameters('maharaCommon').baseTemplateUrl,'glustervmconfig.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + }, + { + "condition": "[parameters('maharaCommon').azureBackupSwitch]", + "type": "Microsoft.Resources/deployments", + "apiVersion": "2016-09-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/',variables('vmName'))]" + ], + "name": "[concat(variables('vmName'),'-Backup')]", + "properties": { + "mode": "Incremental", + "parameters": { + "maharaCommon": { + "value": "[parameters('maharaCommon')]" + }, + "vmName": { + "value": "[variables('vmName')]" + } + }, + "templateLink": { + "uri": "[concat(parameters('maharaCommon').baseTemplateUrl,'recoveryservicesEnlist.json',parameters('maharaCommon').artifactsSasToken)]" + } + } + } + ], + "variables": { + "asRef": "[resourceId('Microsoft.Compute/availabilitySets', parameters('maharaCommon').gfxAvailabilitySetName)]", + "documentation01": "This sub-template create the nodes of the gluster farm", + "documentation02": "It expects certain values in the 'common' datastructure.", + "documentation04": " gfxAvailabilitySetName - name of availability set for the gluster farm", + "documentation05": " vnetName - name of virtual network", + "documentation06": " subnetSan - name of subnet for gluster", + "documentation07": " gfsNameRoot - nameroot for the gluster nodes - combined with counter to get actual name of each node - disk and nic follow the naming scheme", + "documentation08": " glusterVmSku - VM instance size for gluster nodes", + "documentation09": " sshUsername - OS accountusername", + "documentation10": " osType - an array of value that specifies the type of VM", + "documentation15": "This sub-template calls other sub-templates", + "documentation17": " glustervmconfig - conditionally applies post-deployment script on the VM", + "documentation18": " recoveryservicesEnlist - conditionally enlists the VM into the backup regimen", + "documentation19": " fileServerDiskCount - Number of disks to raid0 for the gluster mount", + "documentation20": " fileServerDiskSize - Size per disk for gluster", + "nicName": "[concat(variables('vmName'),'-nic')]", + "nicRef": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]", + "subnetSanRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('maharaCommon').vnetName, parameters('maharaCommon').subnetSan)]", + "vmName": "[concat(parameters('maharaCommon').gfsNameRoot,parameters('counter'))]" + } +} diff --git a/mahara-autoscale-cache/nested/glustervmconfig.json b/mahara-autoscale-cache/nested/glustervmconfig.json new file mode 100644 index 000000000000..fc3cea94d19e --- /dev/null +++ b/mahara-autoscale-cache/nested/glustervmconfig.json @@ -0,0 +1,58 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + }, + "vmName": { + "metadata": { + "description": "Name of VM to process script - not actually used" + }, + "type": "string" + }, + "vmNumber": { + "metadata": { + "description": "Number of the VM in the pool" + }, + "type": "int" + } + }, + "resources": [ + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "apiVersion": "2017-03-30", + "location": "[parameters('maharaCommon').location]", + "name": "[concat(parameters('vmName'),'/','install_gluster')]", + "properties": { + "publisher": "Microsoft.Azure.Extensions", + 
"settings": { + "fileUris": [ + "[variables('scriptUri')]" + ] + }, + "protectedSettings":{ + "commandToExecute": "[variables('cmdExec')]" + }, + "type": "CustomScript", + "typeHandlerVersion": "2.0" + }, + "tags": { + "displayName": "GfsVmExtension" + } + } + ], + "variables": { + "cmdExec": "[concat('bash ', parameters('maharaCommon').glusterScriptFilename, ' ', parameters('maharaCommon').gfsNameRoot, ' ', parameters('maharaCommon').subnetSanPrefix, ' data ', parameters('vmNumber'), ' ', parameters('maharaCommon').glusterVmCount)]", + "documentation01": "This sub-template applies a specific post-deployment script to the gluster vms", + "documentation02": "It expects certain values in the 'common' datastructure.", + "documentation03": " scriptLocation - partial web URI (equivalent to folder)", + "documentation04": " glusterScriptFilename - name of script file", + "documentation06": " gfsNameRoot - nameroot of gluster farm - note that the code applies a vmNumber to get to the specific node", + "documentation07": " glusterVmCount - database (mariadb) password", + "scriptUri": "[concat(parameters('maharaCommon').scriptLocation,parameters('maharaCommon').glusterScriptFilename,parameters('maharaCommon').artifactsSasToken)]" + } +} diff --git a/mahara-autoscale-cache/nested/mysql.json b/mahara-autoscale-cache/nested/mysql.json new file mode 100644 index 000000000000..88210119895e --- /dev/null +++ b/mahara-autoscale-cache/nested/mysql.json @@ -0,0 +1,91 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + }, + "lbPubIp": { + "metadata": { + "description": "Public IP address of the deployed load balancer" + }, + "type": "string" + }, + "ctlrPubIp": { + "metadata": { + "description": "Public IP address of the deployed controller VM" + }, + "type": "string" + } + }, + "resources": [ + { + "type": "Microsoft.DBforMySQL/servers", + "apiVersion": "2017-12-01", + "kind": "", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').serverName]", + "properties": { + "administratorLogin": "[parameters('maharaCommon').dbLogin]", + "administratorLoginPassword": "[parameters('maharaCommon').dbLoginPassword]", + "sslEnforcement": "[parameters('maharaCommon').sslEnforcement]", + "storageProfile": { + "storageMB": "[mul(parameters('maharaCommon').mysqlPgresStgSizeGB, 1024)]", + "backupRetentionDays": "35", + "geoRedundantBackup": "Enabled" + }, + "version": "[parameters('maharaCommon').mysqlVersion]" + }, + "sku": { + "capacity": "[parameters('maharaCommon').mysqlPgresVcores]", + "name": "[parameters('maharaCommon').mysqlPgresSkuName]", + "tier": "[parameters('maharaCommon').mysqlPgresSkuTier]", + "family": "[parameters('maharaCommon').mysqlPgresSkuHwFamily]" + }, + "resources": [ + { + "apiVersion": "2017-12-01", + "dependsOn": [ + "[concat('Microsoft.DBforMySQL/servers/', parameters('maharaCommon').serverName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "mysql-firewall-allow-lb", + "properties": { + "startIpAddress": "[parameters('lbPubIp')]", + "endIpAddress": "[parameters('lbPubIp')]" + }, + "type": "firewallRules" + }, + { + "apiVersion": "2017-12-01", + "dependsOn": [ + "[concat('Microsoft.DBforMySQL/servers/', parameters('maharaCommon').serverName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": 
"mysql-firewall-allow-ctlr", + "properties": { + "startIpAddress": "[parameters('ctlrPubIp')]", + "endIpAddress": "[parameters('ctlrPubIp')]" + }, + "type": "firewallRules" + } + ] + } + ], + "variables": { + "documentation1": "This sub-template creates a mysql server. It expects certain values in the 'common' datastructure.", + "documentation10": " serverName - Mysql server name", + "documentation11": " mysqlVersion - Mysql version", + "documentation2": " administratorLogin - mysql admin username", + "documentation3": " administratorLoginPassword - mysql admin password", + "documentation4": " location - Mysql server location", + "documentation5": " mysqlPgresVcores - Mysql database trasaction units", + "documentation7": " mysqlPgresSkuName - Mysql sku name", + "documentation8": " mysqlPgresStgSizeGB - Mysql sku size in mb", + "documentation9": " mysqlPgresSkuTier - Mysql sku tier", + "documentationA": " mysqlPgresSkuHwFamily - Mysql sku hardware family" + } +} diff --git a/mahara-autoscale-cache/nested/network.json b/mahara-autoscale-cache/nested/network.json new file mode 100644 index 000000000000..d9d382b0ed93 --- /dev/null +++ b/mahara-autoscale-cache/nested/network.json @@ -0,0 +1,259 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + } + }, + "resources": [ + { + "type": "Microsoft.Network/virtualNetworks", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').vnetName]", + "properties": { + "addressSpace": { + "addressPrefixes": [ + "[concat(parameters('maharaCommon').vNetAddressSpace,'/16')]" + ] + }, + "subnets": [ + { + "name": "[parameters('maharaCommon').subnetWeb]", + "properties": { + "addressPrefix": "[parameters('maharaCommon').subnetWebRange]" + } + }, + { + "name": "[parameters('maharaCommon').subnetSan]", + "properties": { + "addressPrefix": "[parameters('maharaCommon').subnetSanRange]" + } + }, + { + "name": "[parameters('maharaCommon').subnetElastic]", + "properties": { + "addressPrefix": "[parameters('maharaCommon').subnetElasticRange]" + } + }, + { + "name": "[parameters('maharaCommon').gatewaySubnet]", + "properties": { + "addressPrefix": "[parameters('maharaCommon').gatewaySubnetRange]" + } + } + ] + } + }, + { + "condition": "[parameters('maharaCommon').vnetGwDeploySwitch]", + "type": "Microsoft.Network/publicIPAddresses", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').gatewayPublicIPName]", + "properties": { + "publicIPAllocationMethod": "Dynamic" + }, + "tags": { + "displayName": "Virtual network gateway Public IP" + } + }, + { + "condition": "[parameters('maharaCommon').vnetGwDeploySwitch]", + "type": "Microsoft.Network/virtualNetworks/subnets", + "apiVersion": "2017-10-01", + "dependsOn": [ + "[resourceId('Microsoft.Network/virtualNetworks', parameters('maharaCommon').vnetName)]" + ], + "name": "[concat(parameters('maharaCommon').vnetName, '/', parameters('maharaCommon').gatewaySubnet)]", + "properties": { + "addressPrefix": "[parameters('maharaCommon').gatewaySubnetRange]" + } + }, + { + "condition": "[parameters('maharaCommon').vnetGwDeploySwitch]", + "type": "Microsoft.Network/virtualNetworkGateways", + "apiVersion": "2017-10-01", + "dependsOn": [ + "[resourceId('Microsoft.Network/publicIPAddresses', 
parameters('maharaCommon').gatewayPublicIPName)]", + "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('maharaCommon').vnetName, parameters('maharaCommon').gatewaySubnet)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').gatewayName]", + "properties": { + "activeActive": false, + "enableBgp": false, + "gatewayType": "[parameters('maharaCommon').gatewayType]", + "ipConfigurations": [ + { + "name": "vnet-Gateway-Config", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('maharaCommon').gatewayPublicIPName)]" + }, + "subnet": { + "id": "[variables('gatewaySubnetRef')]" + } + } + } + ], + "sku": { + "name": "VpnGw1", + "tier": "VpnGw1", + "capacity": 2 + }, + "vpnType": "[parameters('maharaCommon').vpnType]" + } + }, + { + "type": "Microsoft.Network/publicIPAddresses", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').lbPipName]", + "properties": { + "dnsSettings": { + "domainNameLabel": "[parameters('maharaCommon').lbName]" + }, + "publicIPAllocationMethod": "Static" + }, + "tags": { + "displayName": "Load Balancer Public IP" + } + }, + { + "type": "Microsoft.Network/publicIPAddresses", + "apiVersion": "2017-10-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').ctlrPipName]", + "properties": { + "dnsSettings": { + "domainNameLabel": "[parameters('maharaCommon').ctlrPipName]" + }, + "publicIPAllocationMethod": "Static" + }, + "tags": { + "displayName": "Controller VM Public IP" + } + }, + { + "type": "Microsoft.Network/loadBalancers", + "apiVersion": "2017-10-01", + "dependsOn": [ + "[concat('Microsoft.Network/publicIPAddresses/',parameters('maharaCommon').lbPipName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').lbName]", + "properties": { + "backendAddressPools": [ + { + "name": "[parameters('maharaCommon').extBeName ]" + } + ], + "frontendIPConfigurations": [ + { + "name": "[parameters('maharaCommon').extFeName ]", + "properties": { + "publicIPAddress": { + "id": "[variables('lbPipID')]" + } + } + } + ], + "loadBalancingRules": [ + { + "name": "Http", + "properties": { + "backendAddressPool": { + "id": "[variables('extBeID')]" + }, + "backendPort": 80, + "enableFloatingIP": false, + "frontendIPConfiguration": { + "id": "[variables('extFeID')]" + }, + "frontendPort": 80, + "idleTimeoutInMinutes": 5, + "probe": { + "id": "[variables('extProbeID')]" + }, + "protocol": "Tcp" + } + }, + { + "name": "Https", + "properties": { + "backendAddressPool": { + "id": "[variables('extBeID')]" + }, + "backendPort": 443, + "enableFloatingIP": false, + "frontendIPConfiguration": { + "id": "[variables('extFeID')]" + }, + "frontendPort": 443, + "idleTimeoutInMinutes": 5, + "probe": { + "id": "[variables('extProbeID')]" + }, + "protocol": "Tcp" + } + } + ], + "probes": [ + { + "name": "[parameters('maharaCommon').extProbe ]", + "properties": { + "intervalInSeconds": 5, + "numberOfProbes": 3, + "port": 80, + "protocol": "Tcp" + } + } + ] + } + } + ], + "variables": { + "documentation01": "This sub-template creates a virtual network with three subnets and then creates the mahara load-balancer with public IP/dns", + "documentation02": "It expects certain values in the 'common' datastructure.", + "documentation03": " vnetName - name of virtual network", + 
"documentation04": " vNetAddressSpace - base of address of 16 bit address range", + "documentation05": " subnetWeb - name of subnet inside virtual network - will be assigned the .0.0/24 range", + "documentation06": " subnetSan - name of subnet inside virtual network - will be assigned the .1.0/24 range", + "documentation07": " subnetElastic - name of subnet inside virtual network - will be assigned the .4.0/24 range", + "documentation08": " gatewaySubnet - name of subnet inside virtual network - will be assigned the .2.0/24 range", + "documentation09": " lbPipName - name of public IP", + "documentation10": " lbName - name of Mahara load balancer", + "extBeID": "[concat(variables('extLbID'),'/backendAddressPools/',parameters('maharaCommon').extBeName)]", + "extFeID": "[concat(variables('extLbID'),'/frontendIPConfigurations/',parameters('maharaCommon').extFeName)]", + "extLbID": "[resourceId('Microsoft.Network/loadBalancers',parameters('maharaCommon').lbName)]", + "extProbeID": "[concat(variables('extLbID'),'/probes/',parameters('maharaCommon').extProbe)]", + "gatewaySubnetRef": "[concat(resourceId('Microsoft.Network/virtualNetworks', parameters('maharaCommon').vnetName),'/subnets/',parameters('maharaCommon').gatewaySubnet)]", + "lbPipID": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('maharaCommon').lbPipName)]", + "ctlrPipID": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('maharaCommon').ctlrPipName)]" + }, + "outputs": { + "lbPubIp": { + "value": "[reference(parameters('maharaCommon').lbPipName, '2017-10-01').ipAddress]", + "type": "string" + }, + "ctlrPubIp": { + "value": "[reference(parameters('maharaCommon').ctlrPipName, '2017-10-01').ipAddress]", + "type": "string" + }, + "ctlrPubIpId": { + "value": "[variables('ctlrPipID')]", + "type": "string" + }, + "siteFQDN": { + "value": "[reference(variables('lbPipID'), '2017-04-01').dnsSettings.fqdn]", + "type": "string" + } + + } +} diff --git a/mahara-autoscale-cache/nested/none-search.json b/mahara-autoscale-cache/nested/none-search.json new file mode 100644 index 000000000000..a5ca0772eb43 --- /dev/null +++ b/mahara-autoscale-cache/nested/none-search.json @@ -0,0 +1,17 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + } + }, + "resources": [ + ], + "variables": { + "documentation01": "This sub-template that represents no mahara global search is activated." 
+ } +} diff --git a/mahara-autoscale-cache/nested/postgres.json b/mahara-autoscale-cache/nested/postgres.json new file mode 100644 index 000000000000..fa2801060821 --- /dev/null +++ b/mahara-autoscale-cache/nested/postgres.json @@ -0,0 +1,91 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + }, + "lbPubIp": { + "metadata": { + "description": "Public IP address of the deployed load balancer" + }, + "type": "string" + }, + "ctlrPubIp": { + "metadata": { + "description": "Public IP address of the deployed controller VM" + }, + "type": "string" + } + }, + "resources": [ + { + "type": "Microsoft.DBforPostgreSQL/servers", + "apiVersion": "2017-12-01", + "kind": "", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').serverName]", + "properties": { + "administratorLogin": "[parameters('maharaCommon').dbLogin]", + "administratorLoginPassword": "[parameters('maharaCommon').dbLoginPassword]", + "sslEnforcement": "[parameters('maharaCommon').sslEnforcement]", + "storageProfile": { + "storageMB": "[mul(parameters('maharaCommon').mysqlPgresStgSizeGB, 1024)]", + "backupRetentionDays": "35", + "geoRedundantBackup": "Enabled" + }, + "version": "[parameters('maharaCommon').postgresVersion]" + }, + "sku": { + "capacity": "[parameters('maharaCommon').mysqlPgresVcores]", + "name": "[parameters('maharaCommon').mysqlPgresSkuName]", + "tier": "[parameters('maharaCommon').mysqlPgresSkuTier]", + "family": "[parameters('maharaCommon').mysqlPgresSkuHwFamily]" + }, + "resources": [ + { + "apiVersion": "2017-12-01", + "dependsOn": [ + "[concat('Microsoft.DBforPostgreSQL/servers/', parameters('maharaCommon').serverName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "postgres-firewall-allow-lb", + "properties": { + "startIpAddress": "[parameters('lbPubIp')]", + "endIpAddress": "[parameters('lbPubIp')]" + }, + "type": "firewallRules" + }, + { + "apiVersion": "2017-12-01", + "dependsOn": [ + "[concat('Microsoft.DBforPostgreSQL/servers/', parameters('maharaCommon').serverName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "postgres-firewall-allow-ctlr", + "properties": { + "startIpAddress": "[parameters('ctlrPubIp')]", + "endIpAddress": "[parameters('ctlrPubIp')]" + }, + "type": "firewallRules" + } + ] + } + ], + "variables": { + "documentation1": "This sub-template creates a postgresql server. 
It expects certain values in the 'common' datastructure.", + "documentation10": " serverName - Postgresql server name", + "documentation11": " postgresVersion - Postgresql version", + "documentation2": " administratorLogin - postgresql admin username", + "documentation3": " administratorLoginPassword - postgresql admin password", + "documentation4": " location - Postgresql server location", + "documentation5": " mysqlPgresVcores - Postgresql vCore count (sku capacity)", + "documentation7": " mysqlPgresSkuName - Postgresql sku name", + "documentation8": " mysqlPgresStgSizeGB - Postgresql storage size in GB", + "documentation9": " mysqlPgresSkuTier - Postgresql sku tier", + "documentationA": " mysqlPgresSkuHwFamily - Postgresql sku hardware family" + } +} diff --git a/mahara-autoscale-cache/nested/recoveryservices.json b/mahara-autoscale-cache/nested/recoveryservices.json new file mode 100644 index 000000000000..2ae046ea45c5 --- /dev/null +++ b/mahara-autoscale-cache/nested/recoveryservices.json @@ -0,0 +1,97 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + } + }, + "resources": [ + { + "type": "Microsoft.RecoveryServices/vaults", + "apiVersion": "2016-06-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').vaultName]", + "properties": {}, + "sku": { + "name": "RS0", + "tier": "Standard" + } + }, + { + "type": "Microsoft.RecoveryServices/vaults/backupPolicies", + "apiVersion": "2017-07-01", + "dependsOn": [ + "[concat('Microsoft.RecoveryServices/vaults/', parameters('maharaCommon').vaultName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[concat(parameters('maharaCommon').vaultName, '/', parameters('maharaCommon').policyName)]", + "properties": { + "backupManagementType": "AzureIaasVM", + "retentionPolicy": { + "dailySchedule": { + "retentionDuration": { + "count": "[variables( 'dailyRetentionDurationCount')]", + "durationType": "Days" + }, + "retentionTimes": "[variables('scheduleRunTimes')]" + }, + "monthlySchedule": { + "retentionDuration": { + "count": "[variables('monthlyRetentionDurationCount')]", + "durationType": "Months" + }, + "retentionScheduleDaily": { + "daysOfTheMonth": [ + { + "date": 1, + "isLast": false + } + ] + }, + "retentionScheduleFormatType": "Daily", + "retentionScheduleWeekly": null, + "retentionTimes": "[variables('scheduleRunTimes')]" + }, + "retentionPolicyType": "LongTermRetentionPolicy", + "weeklySchedule": { + "daysOfTheWeek": "[variables('daysOfTheWeek')]", + "retentionDuration": { + "count": "[variables( 'weeklyRetentionDurationCount')]", + "durationType": "Weeks" + }, + "retentionTimes": "[variables('scheduleRunTimes')]" + } + }, + "schedulePolicy": { + "schedulePolicyType": "SimpleSchedulePolicy", + "scheduleRunDays": null, + "scheduleRunFrequency": "Daily", + "scheduleRunTimes": "[variables('scheduleRunTimes')]" + } + } + } + ], + "variables": { + "dailyRetentionDurationCount": 7, + "daysOfTheWeek": [ + "Sunday" + ], + "documentation1": "This sub-template creates a recovery services vault. 
It expects certain values in the 'common' datastructure.", + "documentation2": " vaultName - name of the Recovery Services vault", + "documentation3": " policyName - name of backup policy inside vault", + "documentation4": "", + "documentation5": "The policy will create a daily backup with the following retentions", + "documentation6": " Daily - keep last 7 daily", + "documentation7": " Weekly - keep last 4 Sundays", + "documentation8": " Monthly - keep last 6 1st-of-the-month", + "monthlyRetentionDurationCount": 6, + "scheduleRunTimes": [ + "2017-01-01T22:30:00Z" + ], + "weeklyRetentionDurationCount": 4 + } +} diff --git a/mahara-autoscale-cache/nested/recoveryservicesEnlist.json b/mahara-autoscale-cache/nested/recoveryservicesEnlist.json new file mode 100644 index 000000000000..f974035952e5 --- /dev/null +++ b/mahara-autoscale-cache/nested/recoveryservicesEnlist.json @@ -0,0 +1,41 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + }, + "vmName": { + "metadata": { + "description": "Name of VM to enlist in AzureBackup" + }, + "type": "string" + } + }, + "resources": [ + { + "type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems", + "apiVersion": "2016-06-01", + "location": "[parameters('maharaCommon').location]", + "name": "[concat(parameters('maharaCommon').vaultName, '/', variables('backupFabric'), '/', variables('v2VmContainer'), concat(resourceGroup().name,';',parameters('vmName')), '/', variables('v2Vm'), concat(resourceGroup().name,';',parameters('vmName')))]", + "properties": { + "policyId": "[resourceId('Microsoft.RecoveryServices/vaults/backupPolicies',parameters('maharaCommon').vaultName,parameters('maharaCommon').policyName )]", + "protectedItemType": "[variables('v2VmType')]", + "sourceResourceId": "[resourceId(subscription().subscriptionId,resourceGroup().name,'Microsoft.Compute/virtualMachines',parameters('vmName'))]" + } + } + ], + "variables": { + "backupFabric": "Azure", + "documentation1": "This sub-template adds a VM to the recovery services vault. 
It expects certain values in the 'common' datastructure.", + "documentation2": " vaultName - name of virtual network", + "documentation3": " policyName - name of backup policy inside vault", + "documentation4": "", + "v2Vm": "vm;iaasvmcontainerv2;", + "v2VmContainer": "iaasvmcontainer;iaasvmcontainerv2;", + "v2VmType": "Microsoft.Compute/virtualMachines" + } +} diff --git a/mahara-autoscale-cache/nested/storageAccount.json b/mahara-autoscale-cache/nested/storageAccount.json new file mode 100644 index 000000000000..43a4bffd891f --- /dev/null +++ b/mahara-autoscale-cache/nested/storageAccount.json @@ -0,0 +1,49 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + } + }, + "resources": [ + { + "type": "Microsoft.Storage/storageAccounts", + "apiVersion": "2017-06-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').storageAccountName]", + "kind": "Storage", + "sku": { + "name": "[parameters('maharaCommon').storageAccountType]" + }, + "properties": { + "encryption": { + "keySource": "Microsoft.Storage", + "services": { + "blob": { + "enabled": true + }, + "file": { + "enabled": true + } + } + }, + "networkAcls": { + "bypass": "AzureServices", + "defaultAction": "Allow", + "ipRules": [], + "virtualNetworkRules": [] + }, + "supportsHttpsTrafficOnly": true + } + } + ], + "variables": { + "documentation1": "This sub-template creates a storage account. It expects certain values in the 'common' datastructure.", + "documentation2": " storageAccountName - name of storage account", + "documentation3": " storageAccountType - type of storage account" + } +} diff --git a/mahara-autoscale-cache/nested/webvmss.json b/mahara-autoscale-cache/nested/webvmss.json new file mode 100644 index 000000000000..2e2642405a30 --- /dev/null +++ b/mahara-autoscale-cache/nested/webvmss.json @@ -0,0 +1,210 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "maharaCommon": { + "metadata": { + "description": "Common Mahara values" + }, + "type": "object" + }, + "siteFQDN": { + "metadata": { + "description": "FQDN of public IP address" + }, + "type": "string" + } + }, + "resources": [ + { + "type": "Microsoft.Storage/storageAccounts", + "apiVersion": "2017-06-01", + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').vmssdStorageAccounttName]", + "kind": "Storage", + "sku": { + "name": "Standard_LRS" + } + }, + { + "type": "Microsoft.Compute/virtualMachineScaleSets", + "apiVersion": "2017-03-30", + "dependsOn": [ + "[concat('Microsoft.Storage/storageAccounts/', parameters('maharaCommon').vmssdStorageAccounttName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "[parameters('maharaCommon').vmssName]", + "properties": { + "overprovision": true, + "upgradePolicy": { + "mode": "Manual" + }, + "virtualMachineProfile": { + "extensionProfile": { + "extensions": [ + { + "name": "setup_mahara", + "properties": { + "autoUpgradeMinorVersion": true, + "publisher": "Microsoft.Azure.Extensions", + "settings": { + "fileUris": [ + "[variables('scriptUri')]", + "[parameters('maharaCommon').commonFunctionsScriptUri]" + ] + }, + "protectedSettings":{ + "commandToExecute": "[concat('bash ', 
parameters('maharaCommon').webServerSetupScriptFilename, ' ', concat(parameters('maharaCommon').gfsNameRoot, '0'), ' data ', parameters('maharaCommon').siteURL, ' ', parameters('maharaCommon').httpsTermination, ' ', concat('controller-vm-',parameters('maharaCommon').resourcesPrefix), ' ', parameters('maharaCommon').webServerType, ' ', parameters('maharaCommon').fileServerType, ' ', parameters('maharaCommon').storageAccountName, ' ', listKeys(variables('storageAccountId'), '2017-06-01').keys[0].value, ' ', parameters('maharaCommon').ctlrVmName, ' ', parameters('maharaCommon').htmlLocalCopySwitch, ' ' , parameters('siteFQDN'))]" + }, + "type": "CustomScript", + "typeHandlerVersion": "2.0" + } + } + ] + }, + "networkProfile": { + "networkInterfaceConfigurations": [ + { + "name": "vmssnic", + "properties": { + "ipConfigurations": [ + { + "name": "ipcfg_lb", + "properties": { + "loadBalancerBackendAddressPools": [ + { + "id": "[variables('extBeID')]" + } + ], + "subnet": { + "id": "[variables('subnetWebRef')]" + } + } + } + ], + "primary": true + } + } + ] + }, + "osProfile": { + "adminUsername": "[parameters('maharaCommon').sshUsername]", + "computerNamePrefix": "[parameters('maharaCommon').vmssName]", + "linuxConfiguration": { + "disablePasswordAuthentication": true, + "ssh": { + "publicKeys": [ + { + "path": "[concat('/home/', parameters('maharaCommon').sshUsername, '/.ssh/authorized_keys')]", + "keyData": "[parameters('maharaCommon').sshPublicKey]" + } + ] + } + } + }, + "storageProfile": { + "imageReference": "[parameters('maharaCommon').osType]", + "osDisk": { + "caching": "ReadOnly", + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Standard_LRS" + } + } + } + } + }, + "sku": { + "capacity": 1, + "name": "[parameters('maharaCommon').autoscaleVmSku]", + "tier": "Standard" + }, + "tags": { + "displayName": "webfarm" + } + }, + { + "type": "Microsoft.Insights/autoscaleSettings", + "apiVersion": "2015-04-01", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachineScaleSets/', parameters('maharaCommon').vmssName)]" + ], + "location": "[parameters('maharaCommon').location]", + "name": "autoscalewad", + "properties": { + "enabled": true, + "name": "autoscalewad", + "profiles": [ + { + "capacity": { + "default": "1", + "maximum": "[parameters('maharaCommon').autoscaleVmCount]", + "minimum": "1" + }, + "name": "Profile1", + "rules": [ + { + "metricTrigger": { + "metricName": "Percentage CPU", + "metricNamespace": "", + "metricResourceUri": "[variables('vmssID')]", + "operator": "GreaterThan", + "statistic": "Average", + "threshold": 60, + "timeAggregation": "Average", + "timeGrain": "PT1M", + "timeWindow": "PT5M" + }, + "scaleAction": { + "cooldown": "PT10M", + "direction": "Increase", + "type": "ChangeCount", + "value": "1" + } + }, + { + "metricTrigger": { + "metricName": "Percentage CPU", + "metricNamespace": "", + "metricResourceUri": "[variables('vmssID')]", + "operator": "LessThan", + "statistic": "Average", + "threshold": 30, + "timeAggregation": "Average", + "timeGrain": "PT1M", + "timeWindow": "PT5M" + }, + "scaleAction": { + "cooldown": "PT10M", + "direction": "Decrease", + "type": "ChangeCount", + "value": "1" + } + } + ] + } + ], + "targetResourceUri": "[variables('vmssID')]" + } + } + ], + "variables": { + "dstorID": "[resourceId('Microsoft.Storage/storageAccounts',parameters('maharaCommon').vmssdStorageAccounttName)]", + "extBeID": "[concat(variables('extLbID'),'/backendAddressPools/',parameters('maharaCommon').extBeName)]", + "extFeID": 
"[concat(variables('extLbID'),'/frontendIPConfigurations/',parameters('maharaCommon').extFeName)]", + "extLbID": "[resourceId('Microsoft.Network/loadBalancers',parameters('maharaCommon').lbName)]", + "extProbeID": "[concat(variables('extLbID'),'/probes/',parameters('maharaCommon').extProbe )]", + "pipID": "[resourceId('Microsoft.Network/publicIPAddresses',parameters('maharaCommon').lbPipName)]", + "scriptUri": "[concat(parameters('maharaCommon').scriptLocation,parameters('maharaCommon').webServerSetupScriptFilename,parameters('maharaCommon').artifactsSasToken)]", + "subnetWebRef": "[concat(resourceId('Microsoft.Network/virtualNetworks',parameters('maharaCommon').vnetName),'/subnets/',parameters('maharaCommon').subnetWeb)]", + "storageAccountId": "[resourceId('Microsoft.Storage/storageAccounts', parameters('maharaCommon').storageAccountName)]", + "vmssID": "[resourceId('Microsoft.Compute/virtualMachineScaleSets',parameters('maharaCommon').vmssName)]", + "webvmss1NIC": "[concat('Microsoft.Compute/virtualMachineScaleSets/', parameters('maharaCommon').vmssName, '/virtualMachines/0/networkInterfaces/vmssnic')]" + }, + "outputs": { + "webvm1IP": { + "value": "[reference(variables('webvmss1NIC'), '2017-03-30').ipConfigurations[0].properties.privateIPAddress]", + "type": "string" + } + } +} diff --git a/mahara-autoscale-cache/scripts/helper_functions.sh b/mahara-autoscale-cache/scripts/helper_functions.sh new file mode 100644 index 000000000000..b364051ffd8b --- /dev/null +++ b/mahara-autoscale-cache/scripts/helper_functions.sh @@ -0,0 +1,765 @@ +#!/bin/bash + +# Common functions definitions + +function check_fileServerType_param +{ + local fileServerType=$1 + if [ "$fileServerType" != "gluster" -a "$fileServerType" != "azurefiles" -a "$fileServerType" != "nfs" ]; then + echo "Invalid fileServerType ($fileServerType) given. Only 'gluster', 'azurefiles' or 'nfs' are allowed. Exiting" + exit 1 + fi +} + +function create_azure_files_mahara_share +{ + local storageAccountName=$1 + local storageAccountKey=$2 + local logFilePath=$3 + + az storage share create \ + --name mahara \ + --account-name $storageAccountName \ + --account-key $storageAccountKey \ + --fail-on-exist >> $logFilePath +} + +function setup_and_mount_azure_files_mahara_share +{ + local storageAccountName=$1 + local storageAccountKey=$2 + + cat < /etc/mahara_azure_files.credential +username=$storageAccountName +password=$storageAccountKey +EOF + chmod 600 /etc/mahara_azure_files.credential + + grep "^//$storageAccountName.file.core.windows.net/mahara\s\s*/mahara\s\s*cifs" /etc/fstab + if [ $? != "0" ]; then + echo -e "\n//$storageAccountName.file.core.windows.net/mahara /mahara cifs credentials=/etc/mahara_azure_files.credential,uid=www-data,gid=www-data,nofail,vers=3.0,dir_mode=0770,file_mode=0660,serverino,mfsymlinks" >> /etc/fstab + fi + mkdir -p /mahara + mount /mahara +} + +# Functions for making NFS share available +# TODO refactor these functions with the same ones in install_gluster.sh +function scan_for_new_disks +{ + local BLACKLIST=${1} # E.g., /dev/sda|/dev/sdb + declare -a RET + local DEVS=$(ls -1 /dev/sd*|egrep -v "${BLACKLIST}"|egrep -v "[0-9]$") + for DEV in ${DEVS}; + do + # Check each device if there is a "1" partition. If not, + # "assume" it is not partitioned. + if [ ! 
-b ${DEV}1 ]; + then + RET+="${DEV} " + fi + done + echo "${RET}" +} + +function create_raid0_ubuntu { + local RAIDDISK=${1} # E.g., /dev/md1 + local RAIDCHUNKSIZE=${2} # E.g., 128 + local DISKCOUNT=${3} # E.g., 4 + shift + shift + shift + local DISKS="$@" + + dpkg -s mdadm + if [ ${?} -eq 1 ]; + then + echo "installing mdadm" + sudo apt-get -y -q install mdadm + fi + echo "Creating raid0" + udevadm control --stop-exec-queue + echo "yes" | mdadm --create $RAIDDISK --name=data --level=0 --chunk=$RAIDCHUNKSIZE --raid-devices=$DISKCOUNT $DISKS + udevadm control --start-exec-queue + mdadm --detail --verbose --scan > /etc/mdadm/mdadm.conf +} + +function do_partition { + # This function creates one (1) primary partition on the + # disk device, using all available space + local DISK=${1} # E.g., /dev/sdc + + echo "Partitioning disk $DISK" + echo -ne "n\np\n1\n\n\nw\n" | fdisk "${DISK}" + #> /dev/null 2>&1 + + # + # Use the bash-specific $PIPESTATUS to ensure we get the correct exit code + # from fdisk and not from echo + if [ ${PIPESTATUS[1]} -ne 0 ]; + then + echo "An error occurred partitioning ${DISK}" >&2 + echo "I cannot continue" >&2 + exit 2 + fi +} + +function add_local_filesystem_to_fstab { + local UUID=${1} + local MOUNTPOINT=${2} # E.g., /mahara + + grep "${UUID}" /etc/fstab >/dev/null 2>&1 + if [ ${?} -eq 0 ]; + then + echo "Not adding ${UUID} to fstab again (it's already there!)" + else + LINE="\nUUID=${UUID} ${MOUNTPOINT} ext4 defaults,noatime 0 0" + echo -e "${LINE}" >> /etc/fstab + fi +} + +function create_filesystem_with_raid { + local MOUNTPOINT=${1} # E.g., /mahara + local RAIDDISK=${2} # E.g., /dev/md1 + local RAIDPARTITION=${3} # E.g., /dev/md1p1 + + mkdir -p $MOUNTPOINT + + local DISKS=$(scan_for_new_disks "/dev/sda|/dev/sdb") + echo "Disks are ${DISKS}" + declare -i DISKCOUNT + local DISKCOUNT=$(echo "$DISKS" | wc -w) + echo "Disk count is $DISKCOUNT" + if [ $DISKCOUNT = "0" ]; + then + echo "No new (unpartitioned) disks available... Returning..." + return + elif [ $DISKCOUNT -gt 1 ]; + then + create_raid0_ubuntu /dev/md1 128 $DISKCOUNT $DISKS + do_partition ${RAIDDISK} + local PARTITION="${RAIDPARTITION}" + else + do_partition ${DISKS} + local PARTITION=$(fdisk -l ${DISKS}|grep -A 1 Device|tail -n 1|awk '{print $1}') + fi + + echo "Creating filesystem on ${PARTITION}." + mkfs -t ext4 ${PARTITION} + mkdir -p "${MOUNTPOINT}" + local UUID=$(blkid -u filesystem ${PARTITION}|awk -F "[= ]" '{print $3}'|tr -d "\"") + add_local_filesystem_to_fstab "${UUID}" "${MOUNTPOINT}" + echo "Mounting disk ${PARTITION} on ${MOUNTPOINT}" + mount "${MOUNTPOINT}" +} + +function configure_nfs_server_and_export { + local MOUNTPOINT=${1} # E.g., /mahara + + echo "Installing nfs server..." + apt install -y nfs-kernel-server + + echo "Exporting ${MOUNTPOINT}..." + grep "^${MOUNTPOINT}" /etc/exports > /dev/null 2>&1 + if [ $? = "0" ]; then + echo "${MOUNTPOINT} is already exported. Returning..." 
+ else + echo -e "\n${MOUNTPOINT} *(rw,sync,no_root_squash)" >> /etc/exports + systemctl restart nfs-kernel-server.service + fi +} + +#This function will set Mahara's siteurl variable +#to either the user supplied URL or will default +#to the Azure LB public dns +function configure_site_url { + local SITE_URL=${1} + local AZ_FQDN=${2} + if [ "${SITE_URL}" = "www.example.com" ]; then + siteFQDN=${AZ_FQDN} + fi +} + + +function configure_nfs_client_and_mount { + local NFS_SERVER=${1} # E.g., controller-vm-ab12cd + local NFS_DIR=${2} # E.g., /mahara + local MOUNTPOINT=${3} # E.g., /mahara + + apt install -y nfs-common + mkdir -p ${MOUNTPOINT} + + grep "^${NFS_SERVER}:${NFS_DIR}" /etc/fstab > /dev/null 2>&1 + if [ $? = "0" ]; then + echo "${NFS_SERVER}:${NFS_DIR} already in /etc/fstab... skipping to add" + else + echo -e "\n${NFS_SERVER}:${NFS_DIR} ${MOUNTPOINT} nfs auto 0 0" >> /etc/fstab + fi + mount ${MOUNTPOINT} +} + +SERVER_TIMESTAMP_FULLPATH="/mahara/html/mahara/.last_modified_time.mahara_on_azure" +LOCAL_TIMESTAMP_FULLPATH="/var/www/html/mahara/.last_modified_time.mahara_on_azure" + +# Create a script to sync /mahara/html/mahara (gluster/NFS) and /var/www/html/mahara (local) and set up a minutely cron job +# Should be called by root and only on a VMSS web frontend VM +function setup_html_local_copy_cron_job { + if [ "$(whoami)" != "root" ]; then + echo "${0}: Must be run as root!" + return 1 + fi + + local SYNC_SCRIPT_FULLPATH="/usr/local/bin/sync_mahara_html_local_copy_if_modified.sh" + mkdir -p $(dirname ${SYNC_SCRIPT_FULLPATH}) + + local SYNC_LOG_FULLPATH="/var/log/mahara-html-sync.log" + + cat <<EOF > ${SYNC_SCRIPT_FULLPATH} +#!/bin/bash + +sleep \$((\$RANDOM%30)) + +if [ -f "$SERVER_TIMESTAMP_FULLPATH" ]; then + SERVER_TIMESTAMP=\$(cat $SERVER_TIMESTAMP_FULLPATH) + if [ -f "$LOCAL_TIMESTAMP_FULLPATH" ]; then + LOCAL_TIMESTAMP=\$(cat $LOCAL_TIMESTAMP_FULLPATH) + else + logger -p local2.notice -t mahara "Local timestamp file ($LOCAL_TIMESTAMP_FULLPATH) does not exist. Probably first time syncing? Continuing to sync." + mkdir -p /var/www/html + fi + if [ "\$SERVER_TIMESTAMP" != "\$LOCAL_TIMESTAMP" ]; then + logger -p local2.notice -t mahara "Server time stamp (\$SERVER_TIMESTAMP) is different from local time stamp (\$LOCAL_TIMESTAMP). Start syncing..." + if [[ \$(find $SYNC_LOG_FULLPATH -type f -size +20M 2> /dev/null) ]]; then + truncate -s 0 $SYNC_LOG_FULLPATH + fi + echo \$(date +%Y%m%d%H%M%S) >> $SYNC_LOG_FULLPATH + rsync -av --delete /mahara/html/mahara /var/www/html >> $SYNC_LOG_FULLPATH + fi +else + logger -p local2.notice -t mahara "Remote timestamp file ($SERVER_TIMESTAMP_FULLPATH) does not exist. Is /mahara mounted? Exiting with error." + exit 1 +fi +EOF + chmod 500 ${SYNC_SCRIPT_FULLPATH} + + local CRON_DESC_FULLPATH="/etc/cron.d/sync-mahara-html-local-copy" + cat <<EOF > ${CRON_DESC_FULLPATH} +* * * * * root ${SYNC_SCRIPT_FULLPATH} +EOF + chmod 644 ${CRON_DESC_FULLPATH} +} + +LAST_MODIFIED_TIME_UPDATE_SCRIPT_FULLPATH="/usr/local/bin/update_last_modified_time_update.mahara_on_azure.sh" + +# Create a script to modify the last modified timestamp file (/mahara/html/mahara/last_modified_time.mahara_on_azure) +# Should be called by root and only on the controller VM. +# The mahara admin should run the generated script every time the /mahara/html/mahara directory content is updated (e.g., mahara upgrade, config change or plugin install/upgrade) +function create_last_modified_time_update_script { + if [ "$(whoami)" != "root" ]; then + echo "${0}: Must be run as root!" 
+ return 1 + fi + + mkdir -p $(dirname $LAST_MODIFIED_TIME_UPDATE_SCRIPT_FULLPATH) + cat <<EOF > $LAST_MODIFIED_TIME_UPDATE_SCRIPT_FULLPATH +#!/bin/bash +echo \$(date +%Y%m%d%H%M%S) > $SERVER_TIMESTAMP_FULLPATH +EOF + + chmod +x $LAST_MODIFIED_TIME_UPDATE_SCRIPT_FULLPATH +} + +function run_once_last_modified_time_update_script { + $LAST_MODIFIED_TIME_UPDATE_SCRIPT_FULLPATH +} + + +# Long fail2ban config command moved here +function config_fail2ban +{ + cat <<EOF > /etc/fail2ban/jail.conf +# Fail2Ban configuration file. +# +# This file was composed for Debian systems from the original one +# provided now under /usr/share/doc/fail2ban/examples/jail.conf +# for additional examples. +# +# Comments: use '#' for comment lines and ';' for inline comments +# +# To avoid merges during upgrades DO NOT MODIFY THIS FILE +# and rather provide your changes in /etc/fail2ban/jail.local +# + +# The DEFAULT allows a global definition of the options. They can be overridden +# in each jail afterwards. + +[DEFAULT] + +# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not +# ban a host which matches an address in this list. Several addresses can be +# defined using space separator. +ignoreip = 127.0.0.1/8 + +# "bantime" is the number of seconds that a host is banned. +bantime = 600 + +# A host is banned if it has generated "maxretry" during the last "findtime" +# seconds. +findtime = 600 +maxretry = 3 + +# "backend" specifies the backend used to get files modification. +# Available options are "pyinotify", "gamin", "polling" and "auto". +# This option can be overridden in each jail as well. +# +# pyinotify: requires pyinotify (a file alteration monitor) to be installed. +# If pyinotify is not installed, Fail2ban will use auto. +# gamin: requires Gamin (a file alteration monitor) to be installed. +# If Gamin is not installed, Fail2ban will use auto. +# polling: uses a polling algorithm which does not require external libraries. +# auto: will try to use the following backends, in order: +# pyinotify, gamin, polling. +backend = auto + +# "usedns" specifies if jails should trust hostnames in logs, +# warn when reverse DNS lookups are performed, or ignore all hostnames in logs +# +# yes: if a hostname is encountered, a reverse DNS lookup will be performed. +# warn: if a hostname is encountered, a reverse DNS lookup will be performed, +# but it will be logged as a warning. +# no: if a hostname is encountered, will not be used for banning, +# but it will be logged as info. +usedns = warn + +# +# Destination email address used solely for the interpolations in +# jail.{conf,local} configuration files. +destemail = root@localhost + +# +# Name of the sender for mta actions +sendername = Fail2Ban + +# +# ACTIONS +# + +# Default banning action (e.g. iptables, iptables-new, +# iptables-multiport, shorewall, etc) It is used to define +# action_* variables. Can be overridden globally or per +# section within jail.local file +banaction = iptables-multiport + +# email action. Since 0.8.1 upstream fail2ban uses sendmail +# MTA for the mailing. Change mta configuration parameter to mail +# if you want to revert to conventional 'mail'. +mta = sendmail + +# Default protocol +protocol = tcp + +# Specify chain where jumps would need to be added in iptables-* actions +chain = INPUT + +# +# Action shortcuts. 
To be used to define action parameter + +# The simplest action to take: ban only +action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"] + +# ban & send an e-mail with whois report to the destemail. +action_mw = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"] + %(mta)s-whois[name=%(__name__)s, dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s", sendername="%(sendername)s"] + +# ban & send an e-mail with whois report and relevant log lines +# to the destemail. +action_mwl = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"] + %(mta)s-whois-lines[name=%(__name__)s, dest="%(destemail)s", logpath=%(logpath)s, chain="%(chain)s", sendername="%(sendername)s"] + +# Choose default action. To change, just override value of 'action' with the +# interpolation to the chosen action shortcut (e.g. action_mw, action_mwl, etc) in jail.local +# globally (section [DEFAULT]) or per specific section +action = %(action_)s + +# +# JAILS +# + +# Next jails corresponds to the standard configuration in Fail2ban 0.6 which +# was shipped in Debian. Enable any defined here jail by including +# +# [SECTION_NAME] +# enabled = true + +# +# in /etc/fail2ban/jail.local. +# +# Optionally you may override any other parameter (e.g. banaction, +# action, port, logpath, etc) in that section within jail.local + +[ssh] + +enabled = true +port = ssh +filter = sshd +logpath = /var/log/auth.log +maxretry = 6 + +[dropbear] + +enabled = false +port = ssh +filter = dropbear +logpath = /var/log/auth.log +maxretry = 6 + +# Generic filter for pam. Has to be used with action which bans all ports +# such as iptables-allports, shorewall +[pam-generic] + +enabled = false +# pam-generic filter can be customized to monitor specific subset of 'tty's +filter = pam-generic +# port actually must be irrelevant but lets leave it all for some possible uses +port = all +banaction = iptables-allports +port = anyport +logpath = /var/log/auth.log +maxretry = 6 + +[xinetd-fail] + +enabled = false +filter = xinetd-fail +port = all +banaction = iptables-multiport-log +logpath = /var/log/daemon.log +maxretry = 2 + + +[ssh-ddos] + +enabled = false +port = ssh +filter = sshd-ddos +logpath = /var/log/auth.log +maxretry = 6 + + +# Here we use blackhole routes for not requiring any additional kernel support +# to store large volumes of banned IPs + +[ssh-route] + +enabled = false +filter = sshd +action = route +logpath = /var/log/sshd.log +maxretry = 6 + +# Here we use a combination of Netfilter/Iptables and IPsets +# for storing large volumes of banned IPs +# +# IPset comes in two versions. See ipset -V for which one to use +# requires the ipset package and kernel support. 
+[ssh-iptables-ipset4] + +enabled = false +port = ssh +filter = sshd +banaction = iptables-ipset-proto4 +logpath = /var/log/sshd.log +maxretry = 6 + +[ssh-iptables-ipset6] + +enabled = false +port = ssh +filter = sshd +banaction = iptables-ipset-proto6 +logpath = /var/log/sshd.log +maxretry = 6 + + +# +# HTTP servers +# + +[apache] + +enabled = false +port = http,https +filter = apache-auth +logpath = /var/log/apache*/*error.log +maxretry = 6 + +# default action is now multiport, so apache-multiport jail was left +# for compatibility with previous (<0.7.6-2) releases +[apache-multiport] + +enabled = false +port = http,https +filter = apache-auth +logpath = /var/log/apache*/*error.log +maxretry = 6 + +[apache-noscript] + +enabled = false +port = http,https +filter = apache-noscript +logpath = /var/log/apache*/*error.log +maxretry = 6 + +[apache-overflows] + +enabled = false +port = http,https +filter = apache-overflows +logpath = /var/log/apache*/*error.log +maxretry = 2 + +# Ban attackers that try to use PHP's URL-fopen() functionality +# through GET/POST variables. - Experimental, with more than a year +# of usage in production environments. + +[php-url-fopen] + +enabled = false +port = http,https +filter = php-url-fopen +logpath = /var/www/*/logs/access_log + +# A simple PHP-fastcgi jail which works with lighttpd. +# If you run a lighttpd server, then you probably will +# find these kinds of messages in your error_log: +# ALERT – tried to register forbidden variable ‘GLOBALS’ +# through GET variables (attacker '1.2.3.4', file '/var/www/default/htdocs/index.php') + +[lighttpd-fastcgi] + +enabled = false +port = http,https +filter = lighttpd-fastcgi +logpath = /var/log/lighttpd/error.log + +# Same as above for mod_auth +# It catches wrong authentifications + +[lighttpd-auth] + +enabled = false +port = http,https +filter = suhosin +logpath = /var/log/lighttpd/error.log + +[nginx-http-auth] + +enabled = false +filter = nginx-http-auth +port = http,https +logpath = /var/log/nginx/error.log + +# Monitor roundcube server + +[roundcube-auth] + +enabled = false +filter = roundcube-auth +port = http,https +logpath = /var/log/roundcube/userlogins + + +[sogo-auth] + +enabled = false +filter = sogo-auth +port = http, https +# without proxy this would be: +# port = 20000 +logpath = /var/log/sogo/sogo.log + + +# +# FTP servers +# + +[vsftpd] + +enabled = false +port = ftp,ftp-data,ftps,ftps-data +filter = vsftpd +logpath = /var/log/vsftpd.log +# or overwrite it in jails.local to be +# logpath = /var/log/auth.log +# if you want to rely on PAM failed login attempts +# vsftpd's failregex should match both of those formats +maxretry = 6 + + +[proftpd] + +enabled = false +port = ftp,ftp-data,ftps,ftps-data +filter = proftpd +logpath = /var/log/proftpd/proftpd.log +maxretry = 6 + + +[pure-ftpd] + +enabled = false +port = ftp,ftp-data,ftps,ftps-data +filter = pure-ftpd +logpath = /var/log/syslog +maxretry = 6 + + +[wuftpd] + +enabled = false +port = ftp,ftp-data,ftps,ftps-data +filter = wuftpd +logpath = /var/log/syslog +maxretry = 6 + + +# +# Mail servers +# + +[postfix] + +enabled = false +port = smtp,ssmtp,submission +filter = postfix +logpath = /var/log/mail.log + + +[couriersmtp] + +enabled = false +port = smtp,ssmtp,submission +filter = couriersmtp +logpath = /var/log/mail.log + + +# +# Mail servers authenticators: might be used for smtp,ftp,imap servers, so +# all relevant ports get banned +# + +[courierauth] + +enabled = false +port = smtp,ssmtp,submission,imap2,imap3,imaps,pop3,pop3s +filter = 
courierlogin +logpath = /var/log/mail.log + + +[sasl] + +enabled = false +port = smtp,ssmtp,submission,imap2,imap3,imaps,pop3,pop3s +filter = postfix-sasl +# You might consider monitoring /var/log/mail.warn instead if you are +# running postfix since it would provide the same log lines at the +# "warn" level but overall at the smaller filesize. +logpath = /var/log/mail.log + +[dovecot] + +enabled = false +port = smtp,ssmtp,submission,imap2,imap3,imaps,pop3,pop3s +filter = dovecot +logpath = /var/log/mail.log + +# To log wrong MySQL access attempts add to /etc/my.cnf: +# log-error=/var/log/mysqld.log +# log-warning = 2 +[mysqld-auth] + +enabled = false +filter = mysqld-auth +port = 3306 +logpath = /var/log/mysqld.log + + +# DNS Servers + + +# These jails block attacks against named (bind9). By default, logging is off +# with bind9 installation. You will need something like this: +# +# logging { +# channel security_file { +# file "/var/log/named/security.log" versions 3 size 30m; +# severity dynamic; +# print-time yes; +# }; +# category security { +# security_file; +# }; +# }; +# +# in your named.conf to provide proper logging + +# !!! WARNING !!! +# Since UDP is connection-less protocol, spoofing of IP and imitation +# of illegal actions is way too simple. Thus enabling of this filter +# might provide an easy way for implementing a DoS against a chosen +# victim. See +# http://nion.modprobe.de/blog/archives/690-fail2ban-+-dns-fail.html +# Please DO NOT USE this jail unless you know what you are doing. +#[named-refused-udp] +# +#enabled = false +#port = domain,953 +#protocol = udp +#filter = named-refused +#logpath = /var/log/named/security.log + +[named-refused-tcp] + +enabled = false +port = domain,953 +protocol = tcp +filter = named-refused +logpath = /var/log/named/security.log + +# Multiple jails, 1 per protocol, are necessary ATM: +# see https://github.com/fail2ban/fail2ban/issues/37 +[asterisk-tcp] + +enabled = false +filter = asterisk +port = 5060,5061 +protocol = tcp +logpath = /var/log/asterisk/messages + +[asterisk-udp] + +enabled = false +filter = asterisk +port = 5060,5061 +protocol = udp +logpath = /var/log/asterisk/messages + + +# Jail for more extended banning of persistent abusers +# !!! WARNING !!! 
+# Make sure that your loglevel specified in fail2ban.conf/.local +# is not at DEBUG level -- which might then cause fail2ban to fall into +# an infinite loop constantly feeding itself with non-informative lines +[recidive] + +enabled = false +filter = recidive +logpath = /var/log/fail2ban.log +action = iptables-allports[name=recidive] + sendmail-whois-lines[name=recidive, logpath=/var/log/fail2ban.log] +bantime = 604800 ; 1 week +findtime = 86400 ; 1 day +maxretry = 5 +EOF +} diff --git a/mahara-autoscale-cache/scripts/install_elastic.sh b/mahara-autoscale-cache/scripts/install_elastic.sh new file mode 100644 index 000000000000..3c8398129081 --- /dev/null +++ b/mahara-autoscale-cache/scripts/install_elastic.sh @@ -0,0 +1,143 @@ +#!/bin/bash +# Custom Script for Linux +# +# The MIT License (MIT) +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. + +esClusterName=$1 +elasticvm1ip=$2 +elasticvm2ip=$3 +elasticvm3ip=$4 + +echo $esClusterName >> /tmp/vars.txt +echo $elasticvm1ip >> /tmp/vars.txt +echo $elasticvm2ip >> /tmp/vars.txt +echo $elasticvm3ip >> /tmp/vars.txt + +{ + + # make sure the system does automatic update + sudo apt-get -y update + sudo apt-get -y install unattended-upgrades + + # configure elastic search repository & install elastic search + wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add - + echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list + sudo apt-get -y update + sudo apt-get -y install elasticsearch=5.5.0 + + # install the required packages + sudo apt-get install -y openjdk-8-jre openjdk-8-jdk default-jre default-jdk + + # Configure elasticsearch + cat < /etc/elasticsearch/elasticsearch.yml +# ======================== Elasticsearch Configuration ========================= +# +# NOTE: Elasticsearch comes with reasonable defaults for most settings. +# Before you set out to tweak and tune the configuration, make sure you +# understand what are you trying to accomplish and the consequences. +# +# The primary way of configuring a node is via this file. This template lists +# the most important settings you may want to configure for a production cluster. 
+# +# Please consult the documentation for further information on configuration options: +# https://www.elastic.co/guide/en/elasticsearch/reference/index.html +# +# ---------------------------------- Cluster ----------------------------------- +# +# Use a descriptive name for your cluster: +# +cluster.name: ${esClusterName} +# +# ------------------------------------ Node ------------------------------------ +# +# Use a descriptive name for the node: +# +node.name: \${HOSTNAME} +# +# Add custom attributes to the node: +# +#node.attr.rack: r1 +# +# ----------------------------------- Paths ------------------------------------ +# +# Path to directory where to store the data (separate multiple locations by comma): +# +#path.data: /path/to/data +# +# Path to log files: +# +#path.logs: /path/to/logs +# +# ----------------------------------- Memory ----------------------------------- +# +# Lock the memory on startup: +# +#bootstrap.memory_lock: true +# +# Make sure that the heap size is set to about half the memory available +# on the system and that the owner of the process is allowed to use this +# limit. +# +# Elasticsearch performs poorly when the system is swapping the memory. +# +# ---------------------------------- Network ----------------------------------- +# +# Set the bind address to a specific IP (IPv4 or IPv6): +# +network.host: [_eth0_, _local_] +# +# Set a custom port for HTTP: +# +#http.port: 9200 +# +# For more information, consult the network module documentation. +# +# --------------------------------- Discovery ---------------------------------- +# +# Pass an initial list of hosts to perform discovery when new node is started: +# The default list of hosts is ["127.0.0.1", "[::1]"] +# +discovery.zen.ping.unicast.hosts: ["$elasticvm1ip", "$elasticvm2ip", "$elasticvm3ip"] +# +# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1): +# +discovery.zen.minimum_master_nodes: 3 +# +# For more information, consult the zen discovery module documentation. +# +# ---------------------------------- Gateway ----------------------------------- +# +# Block initial recovery after a full cluster restart until N nodes are started: +# +#gateway.recover_after_nodes: 3 +# +# For more information, consult the gateway module documentation. +# +# ---------------------------------- Various ----------------------------------- +# +# Require explicit names when deleting indices: +# +#action.destructive_requires_name: true +EOF + + service elasticsearch restart + +} > /tmp/setup.log diff --git a/mahara-autoscale-cache/scripts/install_gluster.sh b/mahara-autoscale-cache/scripts/install_gluster.sh new file mode 100644 index 000000000000..a3065974386b --- /dev/null +++ b/mahara-autoscale-cache/scripts/install_gluster.sh @@ -0,0 +1,283 @@ +#!/bin/bash + +# This script built for Ubuntu Server 16.04 LTS +# You can customize variables such as MOUNTPOINT, RAIDCHUNKSIZE and so on to your needs. +# You can also customize it to work with other Linux flavours and versions. +# If you customize it, copy it to either Azure blob storage or Github so that Azure +# custom script Linux VM extension can access it, and specify its location in the +# parameters of powershell script or runbook or Azure Resource Manager CRP template. 
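+#
+# The script reads five positional arguments (assigned to variables just
+# below): peer node prefix, peer node IP prefix, Gluster volume name, this
+# node's index and the total node count. For example, with hypothetical
+# values (the real ones are supplied by the ARM template):
+#   ./install_gluster.sh glusterserver 10.0.3 data 0 2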
+ +AZUREVMOFFSET=4 + +NODENAME=$(hostname) +PEERNODEPREFIX=${1} +PEERNODEIPPREFIX=${2} +VOLUMENAME=${3} +NODEINDEX=${4} +NODECOUNT=${5} + +echo $NODENAME >> /tmp/vars.txt +echo $PEERNODEPREFIX >> /tmp/vars.txt +echo $PEERNODEIPPREFIX >> /tmp/vars.txt +echo $VOLUMENAME >> /tmp/vars.txt +echo $NODEINDEX >> /tmp/vars.txt +echo $NODECOUNT >> /tmp/vars.txt + + + +MOUNTPOINT="/datadrive" +RAIDCHUNKSIZE=128 + +RAIDDISK="/dev/md1" +RAIDPARTITION="/dev/md1p1" + +# An set of disks to ignore from partitioning and formatting +BLACKLIST="/dev/sda|/dev/sdb" + +# make sure the system does automatic update +sudo apt-get -y update +sudo apt-get -y install unattended-upgrades + +{ + check_os() { + grep ubuntu /proc/version > /dev/null 2>&1 + isubuntu=${?} + } + + scan_for_new_disks() { + # Looks for unpartitioned disks + declare -a RET + DEVS=($(ls -1 /dev/sd*|egrep -v "${BLACKLIST}"|egrep -v "[0-9]$")) + for DEV in "${DEVS[@]}"; + do + # Check each device if there is a "1" partition. If not, + # "assume" it is not partitioned. + if [ ! -b ${DEV}1 ]; + then + RET+="${DEV} " + fi + done + echo "${RET}" + } + + get_disk_count() { + DISKCOUNT=0 + for DISK in "${DISKS[@]}"; + do + DISKCOUNT+=1 + done; + echo "$DISKCOUNT" + } + + create_raid0_ubuntu() { + dpkg -s mdadm + if [ ${?} -eq 1 ]; + then + echo "installing mdadm" + sudo apt-get -y -q install mdadm + fi + echo "Creating raid0" + udevadm control --stop-exec-queue + echo "yes" | mdadm --create "$RAIDDISK" --name=data --level=0 --chunk="$RAIDCHUNKSIZE" --raid-devices="$DISKCOUNT" "${DISKS[@]}" + udevadm control --start-exec-queue + mdadm --detail --verbose --scan > /etc/mdadm.conf + } + + + do_partition() { + # This function creates one (1) primary partition on the + # disk, using all available space + DISK=${1} + echo "Partitioning disk $DISK" + echo -ne "n\np\n1\n\n\nw\n" | fdisk "${DISK}" + #> /dev/null 2>&1 + + # + # Use the bash-specific $PIPESTATUS to ensure we get the correct exit code + # from fdisk and not from echo + if [ ${PIPESTATUS[1]} -ne 0 ]; + then + echo "An error occurred partitioning ${DISK}" >&2 + echo "I cannot continue" >&2 + exit 2 + fi + } + + add_to_fstab() { + UUID=${1} + MOUNTPOINT=${2} + grep "${UUID}" /etc/fstab >/dev/null 2>&1 + if [ ${?} -eq 0 ]; + then + echo "Not adding ${UUID} to fstab again (it's already there!)" + else + LINE="UUID=${UUID} ${MOUNTPOINT} ext4 defaults,noatime 0 0" + echo -e "${LINE}" >> /etc/fstab + fi + } + + configure_disks() { + ls "${MOUNTPOINT}" + if [ ${?} -eq 0 ] + then + return + fi + DISKS=($(scan_for_new_disks)) + echo "Disks are ${DISKS[@]}" + declare -i DISKCOUNT + DISKCOUNT=$(get_disk_count) + echo "Disk count is $DISKCOUNT" + if [ $DISKCOUNT -gt 1 ]; + then + create_raid0_ubuntu + do_partition ${RAIDDISK} + PARTITION="${RAIDPARTITION}" + else + DISK="${DISKS[0]}" + do_partition ${DISK} + PARTITION=$(fdisk -l ${DISK}|grep -A 1 Device|tail -n 1|awk '{print $1}') + fi + + echo "Creating filesystem on ${PARTITION}." 
+ mkfs -t ext4 ${PARTITION} + mkdir "${MOUNTPOINT}" + read UUID FS_TYPE < <(blkid -u filesystem ${PARTITION}|awk -F "[= ]" '{print $3" "$5}'|tr -d "\"") + add_to_fstab "${UUID}" "${MOUNTPOINT}" + echo "Mounting disk ${PARTITION} on ${MOUNTPOINT}" + mount "${MOUNTPOINT}" + } + + open_ports() { + index=0 + while [ $index -lt $NODECOUNT ]; do + echo "Node ${index}" + thisNode="${PEERNODEIPPREFIX}.$(($index+$AZUREVMOFFSET))" + echo "Node ${thisNode}" + + if [ $index -ne $NODEINDEX ]; then + echo "Node ${thisNode} is a peer" + iptables -I INPUT -p all -s "${thisNode}" -j ACCEPT + echo "${thisNode} ${thisNode}" >> /etc/hosts + else + echo "Node ${thisNode} is me" + echo "127.0.0.1 ${thisNode}" >> /etc/hosts + fi + let index++ + done + iptables-save + } + + disable_apparmor_ubuntu() { + /etc/init.d/apparmor teardown + update-rc.d -f apparmor remove + } + + configure_network() { + open_ports + disable_apparmor_ubuntu + } + + install_glusterfs_ubuntu() { + dpkg -l | grep glusterfs + if [ ${?} -eq 0 ]; + then + return + fi + + if [ ! -e /etc/apt/sources.list.d/gluster* ]; + then + echo "adding gluster ppa" + apt-get -y install python-software-properties + apt-add-repository -y ppa:gluster/glusterfs-3.8 + apt-get -y update + fi + + echo "installing gluster" + apt-get -y install glusterfs-server + + return + } + + configure_gluster() { + echo "gluster step1" + + if [ $isubuntu -eq 0 ]; + then + /etc/init.d/glusterfs-server status + if [ ${?} -ne 0 ]; + then + install_glusterfs_ubuntu + fi + /etc/init.d/glusterfs-server start + fi + + echo "gluster step2" + GLUSTERDIR="${MOUNTPOINT}/brick" + ls "${GLUSTERDIR}" + + if [ ${?} -ne 0 ]; + then + mkdir "${GLUSTERDIR}" + fi + + if [ $NODEINDEX -lt $(($NODECOUNT-1)) ]; + then + return + fi + + echo "gluster step3" + allNodes="${NODENAME}:${GLUSTERDIR}" + echo $allNodes + retry=10 + failed=1 + + while [ $retry -gt 0 ] && [ $failed -gt 0 ]; do + failed=0 + index=0 + echo retrying $retry + while [ $index -lt $(($NODECOUNT-1)) ]; do + glustervm=${PEERNODEPREFIX}${index} + echo $glustervm + + ping -c 3 $glustervm + gluster peer probe $glustervm + if [ ${?} -ne 0 ]; + then + failed=1 + echo "gluster peer probe $glustervm failed" + fi + + gluster peer status + gluster peer status | grep $glustervm + + if [ ${?} -ne 0 ]; + then + failed=1 + echo "gluster peer status $glustervm failed" + fi + + if [ $retry -eq 10 ]; then + allNodes="${allNodes} $glustervm:${GLUSTERDIR}" + fi + let index++ + done + sleep 30 + let retry-- + done + + echo "gluster step4" + echo $allnodes + sleep 60 + gluster volume create ${VOLUMENAME} rep 2 transport tcp ${allNodes} + gluster volume info + gluster volume start ${VOLUMENAME} + echo "gluster complete" + } + + # "main routine" + check_os + configure_network + configure_disks + configure_gluster + +} > /tmp/gluster-setup.log diff --git a/mahara-autoscale-cache/scripts/install_mahara.sh b/mahara-autoscale-cache/scripts/install_mahara.sh new file mode 100644 index 000000000000..4c77c53917ee --- /dev/null +++ b/mahara-autoscale-cache/scripts/install_mahara.sh @@ -0,0 +1,731 @@ +#!/bin/bash + +# The MIT License (MIT) +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the 
following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. + +#parameters +{ + maharaVersion=${1} + glusterNode=${2} + glusterVolume=${3} + siteFQDN=${4} + httpsTermination=${5} + dbIP=${6} + maharadbname=${7} + maharadbuser=${8} + maharadbpass=${9} + adminpass=${10} + dbadminlogin=${11} + dbadminpass=${12} + wabsacctname=${13} + wabsacctkey=${14} + azuremaharadbuser=${15} + elasticVm1IP=${16} + dbServerType=${17} + fileServerType=${18} + thumbprintSslCert=${19} + thumbprintCaCert=${20} + searchType=${21} + azFQDN=${22} + + echo $maharaVersion >> /tmp/vars.txt + echo $glusterNode >> /tmp/vars.txt + echo $glusterVolume >> /tmp/vars.txt + echo $siteFQDN >> /tmp/vars.txt + echo $httpsTermination >> /tmp/vars.txt + echo $dbIP >> /tmp/vars.txt + echo $maharadbname >> /tmp/vars.txt + echo $maharadbuser >> /tmp/vars.txt + echo $maharadbpass >> /tmp/vars.txt + echo $adminpass >> /tmp/vars.txt + echo $dbadminlogin >> /tmp/vars.txt + echo $dbadminpass >> /tmp/vars.txt + echo $wabsacctname >> /tmp/vars.txt + echo $wabsacctkey >> /tmp/vars.txt + echo $azuremaharadbuser >> /tmp/vars.txt + echo $elasticVm1IP >> /tmp/vars.txt + echo $installElasticSearchSwitch >> /tmp/vars.txt + echo $dbServerType >> /tmp/vars.txt + echo $fileServerType >> /tmp/vars.txt + echo $thumbprintSslCert >> /tmp/vars.txt + echo $thumbprintCaCert >> /tmp/vars.txt + echo $searchType >> /tmp/vars.txt + echo $azFQDN >> /tmp/vars.txt + + + . ./helper_functions.sh + check_fileServerType_param $fileServerType + configure_site_url ${siteFQDN} ${azFQDN} + + if [ "$dbServerType" = "mysql" ]; then + mysqlIP=$dbIP + mysqladminlogin=$dbadminlogin + mysqladminpass=$dbadminpass + + elif [ "$dbServerType" = "postgres" ]; then + postgresIP=$dbIP + pgadminlogin=$dbadminlogin + pgadminpass=$dbadminpass + else + echo "Invalid dbServerType ($dbServerType) given. Only 'mysql' or 'postgres' is allowed. 
Exiting" + exit 1 + fi + + # make sure system does automatic updates and fail2ban + sudo apt-get -y update + sudo apt-get -y install unattended-upgrades fail2ban pwgen + config_fail2ban + + # create gluster mount point + mkdir -p /mahara + + export DEBIAN_FRONTEND=noninteractive + + if [ $fileServerType = "gluster" ]; then + # configure gluster repository & install gluster client + sudo add-apt-repository ppa:gluster/glusterfs-3.8 -y >> /tmp/apt1.log + elif [ $fileServerType = "nfs" ]; then + # configure NFS server and export + create_filesystem_with_raid /mahara /dev/md1 /dev/md1p1 + configure_nfs_server_and_export /mahara + fi + + sudo apt-get -y update >> /tmp/apt2.log + sudo apt-get -y --force-yes install rsyslog git >> /tmp/apt3.log + + if [ $fileServerType = "gluster" ]; then + sudo apt-get -y --force-yes install glusterfs-client >> /tmp/apt3.log + else # "azurefiles" + sudo apt-get -y --force-yes install cifs-utils >> /tmp/apt3.log + fi + + if [ $dbServerType = "mysql" ]; then + sudo apt-get -y --force-yes install mysql-client >> /tmp/apt3.log + else + sudo apt-get -y --force-yes install postgresql-client >> /tmp/apt3.log + fi + + # install azure cli & setup container + echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | \ + sudo tee /etc/apt/sources.list.d/azure-cli.list + + sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 52E16F86FEE04B979B07E28DB02C46DF417A0893 >> /tmp/apt4.log + sudo apt-get -y install apt-transport-https >> /tmp/apt4.log + sudo apt-get -y update > /dev/null + sudo apt-get -y install azure-cli >> /tmp/apt4.log + + if [ $fileServerType = "gluster" ]; then + # mount gluster files system + echo -e '\n\rInstalling GlusterFS on '$glusterNode':/'$glusterVolume '/mahara\n\r' + sudo mount -t glusterfs $glusterNode:/$glusterVolume /mahara + fi + + # install pre-requisites + sudo apt-get install -y --fix-missing python-software-properties unzip + + # install the entire stack + sudo apt-get -y --force-yes install nginx php-fpm varnish >> /tmp/apt5a.log + sudo apt-get -y --force-yes install php php-cli php-curl php-zip >> /tmp/apt5b.log + + # Mahara requirements + sudo apt-get -y update > /dev/null + sudo apt-get install -y --force-yes graphviz aspell php-common php-soap php-json > /tmp/apt6.log + sudo apt-get install -y --force-yes php-mbstring php-bcmath php-gd php-mysql php-xmlrpc php-intl php-xml php-bz2 >> /tmp/apt6.log + sudo apt-get install -y --force-yes npm nodejs-legacy + if [ $dbServerType = "mysql" ]; then + sudo apt-get install -y --force-yes php-mysql + else + sudo apt-get install -y --force-yes php-pgsql + fi + + + + + # Set up initial mahara dirs + mkdir -p /mahara/html/mahara + mkdir -p /mahara/certs + mkdir -p /mahara/maharadata + chown -R www-data.www-data /mahara + + # install Mahara + echo '#!/bin/bash + cd /tmp + + # downloading mahara + /usr/bin/curl -k --max-redirs 10 https://github.com/MaharaProject/mahara/archive/'$maharaVersion'.zip -L -o mahara.zip + /usr/bin/unzip -q mahara.zip + # setup theme files + cd mahara-'$maharaVersion' + /bin/mv -v * /mahara/html/mahara + ' > /tmp/setup-mahara.sh + + chmod 755 /tmp/setup-mahara.sh + sudo -u www-data /tmp/setup-mahara.sh >> /tmp/setupmahara.log + cd /mahara/html/mahara + npm install -g gulp + make css + + + # create cron entry + # It is scheduled for once per day. It can be changed as needed. 
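+ # NOTE: despite the comment above, the crontab entry below fires every
+ # minute ("* * * * *"), which is the commonly recommended frequency for
+ # Mahara's cron; edit the schedule here if a less frequent run is wanted.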
+ echo '* * * * * www-data /usr/bin/php /mahara/html/mahara/htdocs/lib/cron.php 2>&1 | /usr/bin/logger -plocal2.notice -t mahara' > /etc/cron.d/mahara-cron + + + # Build nginx config + cat < /etc/nginx/nginx.conf +user www-data; +worker_processes 2; +pid /run/nginx.pid; + +events { + worker_connections 768; +} + +http { + + sendfile on; + tcp_nopush on; + tcp_nodelay on; + keepalive_timeout 65; + types_hash_max_size 2048; + client_max_body_size 0; + proxy_max_temp_file_size 0; + server_names_hash_bucket_size 128; + fastcgi_buffers 16 16k; + fastcgi_buffer_size 32k; + proxy_buffering off; + include /etc/nginx/mime.types; + default_type application/octet-stream; + + access_log /var/log/nginx/access.log; + error_log /var/log/nginx/error.log; + + set_real_ip_from 127.0.0.1; + real_ip_header X-Forwarded-For; + ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE + ssl_prefer_server_ciphers on; + + gzip on; + gzip_disable "msie6"; + gzip_vary on; + gzip_proxied any; + gzip_comp_level 6; + gzip_buffers 16 8k; + gzip_http_version 1.1; + gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; + + map \$http_x_forwarded_proto \$fastcgi_https { + default \$https; + http ''; + https on; + } + + log_format mahara_combined '\$remote_addr - \$upstream_http_x_maharauser [\$time_local] ' + '"\$request" \$status \$body_bytes_sent ' + '"\$http_referer" "\$http_user_agent"'; + + + include /etc/nginx/conf.d/*.conf; + include /etc/nginx/sites-enabled/*; +} +EOF + + cat <> /etc/nginx/sites-enabled/${siteFQDN}.conf +server { + listen 81 default; + server_name ${siteFQDN}; + root /mahara/html/mahara/htdocs; + index index.php index.html index.htm; + + # Log to syslog + error_log syslog:server=localhost,facility=local1,severity=error,tag=mahara; + access_log syslog:server=localhost,facility=local1,severity=notice,tag=mahara mahara_combined; + + # Log XFF IP instead of varnish + set_real_ip_from 10.0.0.0/8; + set_real_ip_from 127.0.0.1; + set_real_ip_from 172.16.0.0/12; + set_real_ip_from 192.168.0.0/16; + real_ip_header X-Forwarded-For; + real_ip_recursive on; + + + # Redirect to https + if (\$http_x_forwarded_proto != https) { + return 301 https://\$server_name\$request_uri; + } + rewrite ^/(.*\.php)(/)(.*)$ /\$1?file=/\$3 last; + + + # Filter out php-fpm status page + location ~ ^/server-status { + return 404; + } + + location / { + try_files \$uri \$uri/index.php?\$query_string; + } + + location ~ [^/]\.php(/|$) { + fastcgi_split_path_info ^(.+?\.php)(/.*)$; + if (!-f \$document_root\$fastcgi_script_name) { + return 404; + } + + fastcgi_buffers 16 16k; + fastcgi_buffer_size 32k; + fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name; + fastcgi_pass unix:/run/php/php7.0-fpm.sock; + fastcgi_read_timeout 3600; + fastcgi_index index.php; + include fastcgi_params; + } +} + +server { + listen 443 ssl; + root /mahara/html/mahara/htdocs; + index index.php index.html index.htm; + + ssl on; + ssl_certificate /mahara/certs/nginx.crt; + ssl_certificate_key /mahara/certs/nginx.key; + + # Log to syslog + error_log syslog:server=localhost,facility=local1,severity=error,tag=mahara; + access_log syslog:server=localhost,facility=local1,severity=notice,tag=mahara mahara_combined; + + # Log XFF IP instead of varnish + set_real_ip_from 10.0.0.0/8; + set_real_ip_from 127.0.0.1; + set_real_ip_from 172.16.0.0/12; + set_real_ip_from 192.168.0.0/16; + real_ip_header X-Forwarded-For; + real_ip_recursive on; + + location / { + 
proxy_set_header Host \$host; + proxy_set_header HTTP_REFERER \$http_referer; + proxy_set_header X-Forwarded-Host \$host; + proxy_set_header X-Forwarded-Server \$host; + proxy_set_header X-Forwarded-Proto https; + proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; + proxy_pass http://localhost:80; + } +} +EOF + + echo -e "Generating SSL self-signed certificate" + openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /mahara/certs/nginx.key -out /mahara/certs/nginx.crt -subj "/C=BR/ST=SP/L=SaoPaulo/O=IT/CN=$siteFQDN" + + # php config + PhpIni=/etc/php/7.0/fpm/php.ini + sed -i "s/memory_limit.*/memory_limit = 512M/" $PhpIni + sed -i "s/max_execution_time.*/max_execution_time = 18000/" $PhpIni + sed -i "s/max_input_vars.*/max_input_vars = 100000/" $PhpIni + sed -i "s/max_input_time.*/max_input_time = 600/" $PhpIni + sed -i "s/upload_max_filesize.*/upload_max_filesize = 1024M/" $PhpIni + sed -i "s/post_max_size.*/post_max_size = 1056M/" $PhpIni + sed -i "s/;opcache.use_cwd.*/opcache.use_cwd = 1/" $PhpIni + sed -i "s/;opcache.validate_timestamps.*/opcache.validate_timestamps = 1/" $PhpIni + sed -i "s/;opcache.save_comments.*/opcache.save_comments = 1/" $PhpIni + sed -i "s/;opcache.enable_file_override.*/opcache.enable_file_override = 0/" $PhpIni + sed -i "s/;opcache.enable.*/opcache.enable = 1/" $PhpIni + sed -i "s/;opcache.memory_consumption.*/opcache.memory_consumption = 256/" $PhpIni + sed -i "s/;opcache.max_accelerated_files.*/opcache.max_accelerated_files = 8000/" $PhpIni + + # fpm config - overload this + cat < /etc/php/7.0/fpm/pool.d/www.conf +[www] +user = www-data +group = www-data +listen = /run/php/php7.0-fpm.sock +listen.owner = www-data +listen.group = www-data +pm = dynamic +pm.max_children = 3000 +pm.start_servers = 20 +pm.min_spare_servers = 22 +pm.max_spare_servers = 30 +EOF + + # Remove the default site. Mahara is the only site we want + rm -f /etc/nginx/sites-enabled/default + + # restart Nginx + sudo service nginx restart + + # Configure varnish startup for 16.04 + VARNISHSTART="ExecStart=\/usr\/sbin\/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f \/etc\/varnish\/mahara.vcl -S \/etc\/varnish\/secret -s malloc,1024m -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p timeout_linger=100 -p timeout_idle=30 -p send_timeout=1800 -p thread_pools=4 -p http_max_hdr=512 -p workspace_backend=512k" + sed -i "s/^ExecStart.*/${VARNISHSTART}/" /lib/systemd/system/varnish.service + + # Configure varnish VCL for mahara + cat <> /etc/varnish/mahara.vcl +vcl 4.0; + +import std; +import directors; +backend default { + .host = "localhost"; + .port = "81"; + .first_byte_timeout = 3600s; + .connect_timeout = 600s; + .between_bytes_timeout = 600s; +} + +sub vcl_recv { + # Varnish does not support SPDY or HTTP/2.0 untill we upgrade to Varnish 5.0 + if (req.method == "PRI") { + return (synth(405)); + } + + if (req.restarts == 0) { + if (req.http.X-Forwarded-For) { + set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip; + } else { + set req.http.X-Forwarded-For = client.ip; + } + } + + # Non-RFC2616 or CONNECT HTTP requests methods filtered. 
Pipe requests directly to backend + if (req.method != "GET" && + req.method != "HEAD" && + req.method != "PUT" && + req.method != "POST" && + req.method != "TRACE" && + req.method != "OPTIONS" && + req.method != "DELETE") { + return (pipe); + } + + # Varnish don't mess with healthchecks + if (req.url ~ "^/admin/tool/heartbeat" || req.url ~ "^/healthcheck.php") + { + return (pass); + } + + # Pipe requests to backup.php straight to backend - prevents problem with progress bar long polling 503 problem + # This is here because backup.php is POSTing to itself - Filter before !GET&&!HEAD + if (req.url ~ "^/backup/backup.php") + { + return (pipe); + } + + # Varnish only deals with GET and HEAD by default. If request method is not GET or HEAD, pass request to backend + if (req.method != "GET" && req.method != "HEAD") { + return (pass); + } + + ### Rules for Mahara sites ### + if (req.url ~ "^/theme/" || + req.url ~ "^/js/" || + req.url ~ "^/lib/" || + req.url ~ "^/libs/" + ) { + return(hash); + } + + # Perform lookup for selected assets that we know are static but Mahara still needs a Cookie + if( req.url ~ "^/theme/.+\.(png|jpg|jpeg|gif|css|js|webp)" || + req.url ~ "^/lib/.+\.(png|jpg|jpeg|gif|css|js|webp)" || + req.url ~ "^/pluginfile.php/[0-9]+/course/overviewfiles/.+\.(?i)(png|jpg)$" + ) + { + # Set internal temporary header, based on which we will do things in vcl_backend_response + set req.http.X-Long-TTL = "86400"; + return (hash); + } + + # Serve requests to SCORM checknet.txt from varnish. Have to remove get parameters. Response body always contains "1" + if ( req.url ~ "^/lib/yui/build/mahara-core-checknet/assets/checknet.txt" ) + { + set req.url = regsub(req.url, "(.*)\?.*", "\1"); + unset req.http.Cookie; # Will go to hash anyway at the end of vcl_recv + set req.http.X-Long-TTL = "86400"; + return(hash); + } + + # Requests containing "Cookie" or "Authorization" headers will not be cached + if (req.http.Authorization || req.http.Cookie) { + return (pass); + } + + # Almost everything in Mahara correctly serves Cache-Control headers, if + # needed, which varnish will honor, but there are some which don't. Rather + # than explicitly finding them all and listing them here we just fail safe + # and don't cache unknown urls that get this far. + return (pass); +} + +sub vcl_backend_response { + # Happens after we have read the response headers from the backend. + # + # Here you clean the response headers, removing silly Set-Cookie headers + # and other mistakes your backend does. 
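+ # (X-Long-TTL is the temporary request header set in vcl_recv above for
+ # known static assets. The blocks below use it to force a long,
+ # client-cacheable max-age/TTL and stash the original Cache-Control/Pragma
+ # headers in X-Orig-* so vcl_deliver can restore them before delivery.)
+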
+ + # We know these assest are static, let's set TTL >0 and allow client caching + if ( beresp.http.Cache-Control && bereq.http.X-Long-TTL && beresp.ttl < std.duration(bereq.http.X-Long-TTL + "s", 1s) && !beresp.http.WWW-Authenticate ) + { # If max-age < defined in X-Long-TTL header + set beresp.http.X-Orig-Pragma = beresp.http.Pragma; unset beresp.http.Pragma; + set beresp.http.X-Orig-Cache-Control = beresp.http.Cache-Control; + set beresp.http.Cache-Control = "public, max-age="+bereq.http.X-Long-TTL+", no-transform"; + set beresp.ttl = std.duration(bereq.http.X-Long-TTL + "s", 1s); + unset bereq.http.X-Long-TTL; + } + else if( !beresp.http.Cache-Control && bereq.http.X-Long-TTL && !beresp.http.WWW-Authenticate ) { + set beresp.http.X-Orig-Pragma = beresp.http.Pragma; unset beresp.http.Pragma; + set beresp.http.Cache-Control = "public, max-age="+bereq.http.X-Long-TTL+", no-transform"; + set beresp.ttl = std.duration(bereq.http.X-Long-TTL + "s", 1s); + unset bereq.http.X-Long-TTL; + } + else { # Don't touch headers if max-age > defined in X-Long-TTL header + unset bereq.http.X-Long-TTL; + } + + # Here we set X-Trace header, prepending it to X-Trace header received from backend. Useful for troubleshooting + if(beresp.http.x-trace && !beresp.was_304) { + set beresp.http.X-Trace = regsub(server.identity, "^([^.]+),?.*$", "\1")+"->"+regsub(beresp.backend.name, "^(.+)\((?:[0-9]{1,3}\.){3}([0-9]{1,3})\)","\1(\2)")+"->"+beresp.http.X-Trace; + } + else { + set beresp.http.X-Trace = regsub(server.identity, "^([^.]+),?.*$", "\1")+"->"+regsub(beresp.backend.name, "^(.+)\((?:[0-9]{1,3}\.){3}([0-9]{1,3})\)","\1(\2)"); + } + + # Gzip JS, CSS is done at the ngnix level doing it here dosen't respect the no buffer requsets + # if (beresp.http.content-type ~ "application/javascript.*" || beresp.http.content-type ~ "text") { + # set beresp.do_gzip = true; + #} +} + +sub vcl_deliver { + + # Revert back to original Cache-Control header before delivery to client + if (resp.http.X-Orig-Cache-Control) + { + set resp.http.Cache-Control = resp.http.X-Orig-Cache-Control; + unset resp.http.X-Orig-Cache-Control; + } + + # Revert back to original Pragma header before delivery to client + if (resp.http.X-Orig-Pragma) + { + set resp.http.Pragma = resp.http.X-Orig-Pragma; + unset resp.http.X-Orig-Pragma; + } + + # (Optional) X-Cache HTTP header will be added to responce, indicating whether object was retrieved from backend, or served from cache + if (obj.hits > 0) { + set resp.http.X-Cache = "HIT"; + } else { + set resp.http.X-Cache = "MISS"; + } + + # Set X-AuthOK header when totara/varnsih authentication succeeded + if (req.http.X-AuthOK) { + set resp.http.X-AuthOK = req.http.X-AuthOK; + } + + # If desired "Via: 1.1 Varnish-v4" response header can be removed from response + unset resp.http.Via; + unset resp.http.Server; + + return(deliver); +} + +sub vcl_backend_error { + # More comprehensive varnish error page. Display time, instance hostname, host header, url for easier troubleshooting. + set beresp.http.Content-Type = "text/html; charset=utf-8"; + set beresp.http.Retry-After = "5"; + synthetic( {" + + + + "} + beresp.status + " " + beresp.reason + {" + + +

Error "} + beresp.status + " " + beresp.reason + {"

+

"} + beresp.reason + {"

+

Guru Meditation:

+

Time: "} + now + {"

+

Node: "} + server.hostname + {"

+

Host: "} + bereq.http.host + {"

+

URL: "} + bereq.url + {"

+

XID: "} + bereq.xid + {"

+
+

Varnish cache server + + + "} ); + return (deliver); +} + +sub vcl_synth { + + #Redirect using '301 - Permanent Redirect', permanent redirect + if (resp.status == 851) { + set resp.http.Location = req.http.x-redir; + set resp.http.X-Varnish-Redirect = true; + set resp.status = 301; + return (deliver); + } + + #Redirect using '302 - Found', temporary redirect + if (resp.status == 852) { + set resp.http.Location = req.http.x-redir; + set resp.http.X-Varnish-Redirect = true; + set resp.status = 302; + return (deliver); + } + + #Redirect using '307 - Temporary Redirect', !GET&&!HEAD requests, dont change method on redirected requests + if (resp.status == 857) { + set resp.http.Location = req.http.x-redir; + set resp.http.X-Varnish-Redirect = true; + set resp.status = 307; + return (deliver); + } + + #Respond with 403 - Forbidden + if (resp.status == 863) { + set resp.http.X-Varnish-Error = true; + set resp.status = 403; + return (deliver); + } +} +EOF + + # Restart Varnish + systemctl daemon-reload + service varnish restart + + if [ $dbServerType = "mysql" ]; then + mysql -h $mysqlIP -u $mysqladminlogin -p${mysqladminpass} -e "CREATE DATABASE ${maharadbname} CHARACTER SET utf8;" + mysql -h $mysqlIP -u $mysqladminlogin -p${mysqladminpass} -e "GRANT ALL ON ${maharadbname}.* TO ${maharadbuser} IDENTIFIED BY '${maharadbpass}';" + + echo "mysql -h $mysqlIP -u $mysqladminlogin -p${mysqladminpass} -e \"CREATE DATABASE ${maharadbname};\"" >> /tmp/debug + echo "mysql -h $mysqlIP -u $mysqladminlogin -p${mysqladminpass} -e \"GRANT ALL ON ${maharadbname}.* TO ${maharadbuser} IDENTIFIED BY '${maharadbpass}';\"" >> /tmp/debug + else + # Create postgres db + echo "${postgresIP}:5432:postgres:${pgadminlogin}:${pgadminpass}" > /root/.pgpass + chmod 600 /root/.pgpass + psql -h $postgresIP -U $pgadminlogin -c "CREATE DATABASE ${maharadbname};" postgres + psql -h $postgresIP -U $pgadminlogin -c "CREATE USER ${maharadbuser} WITH PASSWORD '${maharadbpass}';" postgres + psql -h $postgresIP -U $pgadminlogin -c "GRANT ALL ON DATABASE ${maharadbname} TO ${maharadbuser};" postgres + # Need to preserve pg auth file for updating database later, if elasticsearch option was set. 
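+ # (.pgpass entries use the format host:port:database:user:password, as
+ # written above; the file is deleted below once it is no longer needed.)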
+ if [ $searchType = "none" ]; then + rm -f /root/.pgpass + fi + fi + + # Master config for syslog + mkdir /var/log/sitelogs + chown syslog.adm /var/log/sitelogs + cat <> /etc/rsyslog.conf +\$ModLoad imudp +\$UDPServerRun 514 +EOF + cat <> /etc/rsyslog.d/40-sitelogs.conf +local1.* /var/log/sitelogs/mahara/access.log +local1.err /var/log/sitelogs/mahara/error.log +local2.* /var/log/sitelogs/mahara/cron.log +EOF + service rsyslog restart + +# Fire off mahara setup +PWGEN=`which pwgen` +SALT=`${PWGEN} 32 1` +URLSECRET=`${PWGEN} 8 1` + + cat <> /mahara/html/mahara/htdocs/config.php +dbtype = '$dbServerType'; +\$cfg->dbhost = '$dbIP'; +\$cfg->dbport = null; +\$cfg->dbname = '$maharadbname'; +\$cfg->dbuser = '$azuremaharadbuser'; +\$cfg->dbpass = '$maharadbpass'; +\$cfg->dataroot = '/mahara/maharadata'; +\$cfg->wwwroot = 'https://$siteFQDN'; +\$cfg->passwordsaltmain = '$SALT'; +\$cfg->productionmode = true; +\$cfg->sslproxy = true; +\$cfg->sendemail = true; +\$cfg->urlsecret = '$URLSECRET'; +\$cfg->directorypermissions = 0750; + +EOF + +cd /tmp; sudo -u www-data /usr/bin/php /mahara/html/mahara/htdocs/admin/cli/install.php --adminpassword="$adminpass" --adminemail=admin@"$siteFQDN" --sitename='Mahara Portfolio' || true + +if [ $searchType = "elastic" ]; then + echo "\$cfg->plugin_search_elasticsearch_indexname = 'mahara';" >> /mahara/html/mahara/htdocs/config.php + echo "\$cfg->plugin_search_elasticsearch_host = '$elasticVm1IP';" >> /mahara/html/mahara/htdocs/config.php + + if [ $dbServerType = "mysql" ]; then + mysql -h $mysqlIP -u $mysqladminlogin -p${mysqladminpass} ${maharadbname} -e "update config set value = 'elasticsearch' where field = 'searchplugin';" + + else + psql -h $postgresIP -U $pgadminlogin -d ${maharadbname} -c "update config set value = 'elasticsearch' where field = 'searchplugin';" postgres + rm -f /root/.pgpass + fi +fi + + echo -e "\n\rDone! Installation completed!\n\r" + + # Set up cronned sql dump + cat < /etc/cron.d/sql-backup + 22 02 * * * root /usr/bin/mysqldump -h $dbIP -u ${azuremaharadbuser} -p'${maharadbpass}' --databases ${maharadbname} | gzip > /mahara/db-backup.sql.gz +EOF + + # Turning off services we don't need the jumpbox running + service nginx stop + service php7.0-fpm stop + service varnish stop + service varnishncsa stop + service varnishlog stop + + if [ $fileServerType = "gluster" -o $fileServerType = "nfs" ]; then + # make sure Mahara can read its code directory but not write + sudo chown -R root.root /mahara/html/mahara + sudo find /mahara/html/mahara -type f -exec chmod 644 '{}' \; + sudo find /mahara/html/mahara -type d -exec chmod 755 '{}' \; + fi + + if [ $fileServerType = "azurefiles" ]; then + # Delayed copy of mahara installation to the Azure Files share + + # First rename mahara directory to something else + mv /mahara /mahara_old_delete_me + # Then create the mahara share + echo -e '\n\rCreating an Azure Files share for mahara' + create_azure_files_mahara_share $wabsacctname $wabsacctkey /tmp/wabs.log + # Set up and mount Azure Files share. 
Must be done after nginx is installed because of www-data user/group + echo -e '\n\rSetting up and mounting Azure Files share on //'$wabsacctname'.file.core.windows.net/mahara on /mahara\n\r' + setup_and_mount_azure_files_mahara_share $wabsacctname $wabsacctkey + # Move the local installation over to the Azure Files + echo -e '\n\rMoving locally installed mahara over to Azure Files' + cp -a /mahara_old_delete_me/* /mahara || true # Ignore case sensitive directory copy failure + # rm -rf /mahara_old_delete_me || true # Keep the files just in case + fi + + create_last_modified_time_update_script + run_once_last_modified_time_update_script + +} > /tmp/install.log diff --git a/mahara-autoscale-cache/scripts/setup_webserver.sh b/mahara-autoscale-cache/scripts/setup_webserver.sh new file mode 100644 index 000000000000..c2bc63fa4031 --- /dev/null +++ b/mahara-autoscale-cache/scripts/setup_webserver.sh @@ -0,0 +1,645 @@ +# Custom Script for Linux for Mahara/Azure + +#!/bin/bash + +# The MIT License (MIT) +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. + +glusterNode=${1} +glusterVolume=${2} +siteFQDN=${3} +httpsTermination=${4} +syslogserver=${5} +webServerType=${6} +fileServerType=${7} +storageAccountName=${8} +storageAccountKey=${9} +nfsVmName=${10} +htmlLocalCopySwitch=${11} +azFQDN=${12} + +echo $glusterNode >> /tmp/vars.txt +echo $glusterVolume >> /tmp/vars.txt +echo $siteFQDN >> /tmp/vars.txt +echo $httpsTermination >> /tmp/vars.txt +echo $syslogserver >> /tmp/vars.txt +echo $webServerType >> /tmp/vars.txt +echo $fileServerType >> /tmp/vars.txt +echo $storageAccountName >> /tmp/vars.txt +echo $storageAccountKey >> /tmp/vars.txt +echo $nfsVmName >> /tmp/vars.txt +echo $htmlLocalCopySwitch >> /tmp/vars.txt +echo $azFQDN >> /tmp/vars.txt + +. 
./helper_functions.sh + +configure_site_url ${siteFQDN} ${azFQDN} + +check_fileServerType_param $fileServerType + +{ + # make sure the system does automatic update + sudo apt-get -y update + sudo apt-get -y install unattended-upgrades + + # install pre-requisites + sudo apt-get -y install python-software-properties unzip rsyslog + + sudo apt-get -y install postgresql-client mysql-client git + + if [ $fileServerType = "gluster" ]; then + #configure gluster repository & install gluster client + sudo add-apt-repository ppa:gluster/glusterfs-3.8 -y + sudo apt-get -y update + sudo apt-get -y install glusterfs-client + elif [ "$fileServerType" = "azurefiles" ]; then + sudo apt-get -y install cifs-utils + fi + + # install the base stack + sudo apt-get -y install varnish php php-cli php-curl php-zip php-pear php-mbstring php-dev mcrypt + + if [ "$webServerType" = "nginx" -o "$httpsTermination" = "VMSS" ]; then + sudo apt-get -y install nginx + fi + + if [ "$webServerType" = "apache" ]; then + # install apache pacakges + sudo apt-get -y install apache2 libapache2-mod-php + else + # for nginx-only option + sudo apt-get -y install php-fpm + fi + + # Mahara requirements + sudo apt-get install -y graphviz aspell php-soap php-json php-bcmath php-gd php-pgsql php-mysql php-xmlrpc php-intl php-xml php-bz2 + install_php_sql_driver + + if [ $fileServerType = "gluster" ]; then + # Mount gluster fs for /mahara + sudo mkdir -p /mahara + sudo chown www-data /mahara + sudo chmod 770 /mahara + sudo echo -e 'mount -t glusterfs '$glusterNode':/'$glusterVolume' /mahara' + sudo mount -t glusterfs $glusterNode:/$glusterVolume /mahara + sudo echo -e $glusterNode':/'$glusterVolume' /mahara glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0' >> /etc/fstab + sudo mount -a + elif [ $fileServerType = "nfs" ]; then + configure_nfs_client_and_mount $nfsVmName /mahara /mahara + else # "azurefiles" + setup_and_mount_azure_files_mahara_share $storageAccountName $storageAccountKey + fi + + # Configure syslog to forward + cat <> /etc/rsyslog.conf +\$ModLoad imudp +\$UDPServerRun 514 +EOF + cat <> /etc/rsyslog.d/40-remote.conf +local1.* @${syslogserver}:514 +local2.* @${syslogserver}:514 +EOF + service syslog restart + + if [ "$webServerType" = "nginx" -o "$httpsTermination" = "VMSS" ]; then + # Build nginx config + cat < /etc/nginx/nginx.conf +user www-data; +worker_processes 2; +pid /run/nginx.pid; + +events { + worker_connections 2048; +} + +http { + + sendfile on; + tcp_nopush on; + tcp_nodelay on; + keepalive_timeout 65; + types_hash_max_size 2048; + client_max_body_size 0; + proxy_max_temp_file_size 0; + server_names_hash_bucket_size 128; + fastcgi_buffers 16 16k; + fastcgi_buffer_size 32k; + proxy_buffering off; + include /etc/nginx/mime.types; + default_type application/octet-stream; + + access_log /var/log/nginx/access.log; + error_log /var/log/nginx/error.log; + + set_real_ip_from 127.0.0.1; + real_ip_header X-Forwarded-For; + ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE + ssl_prefer_server_ciphers on; + + gzip on; + gzip_disable "msie6"; + gzip_vary on; + gzip_proxied any; + gzip_comp_level 6; + gzip_buffers 16 8k; + gzip_http_version 1.1; + gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; +EOF + if [ "$httpsTermination" != "None" ]; then + cat <> /etc/nginx/nginx.conf + map \$http_x_forwarded_proto \$fastcgi_https { + default \$https; + http ''; + https on; + } +EOF + fi + + cat <> 
/etc/nginx/nginx.conf + log_format mahara_combined '\$remote_addr - \$upstream_http_x_maharauser [\$time_local] ' + '"\$request" \$status \$body_bytes_sent ' + '"\$http_referer" "\$http_user_agent"'; + + + include /etc/nginx/conf.d/*.conf; + include /etc/nginx/sites-enabled/*; +} +EOF + fi # if [ "$webServerType" = "nginx" -o "$httpsTermination" = "VMSS" ]; + + # Set up html dir local copy if specified + htmlRootDir="/mahara/html/mahara/htdocs" + if [ "$htmlLocalCopySwitch" = "True" ]; then + mkdir -p /var/www/html + rsync -av --delete /mahara/html/mahara /var/www/html + htmlRootDir="/var/www/html/mahara/htdocs" + setup_html_local_copy_cron_job + fi + + if [ "$httpsTermination" = "VMSS" ]; then + # Configure nginx/https + cat <> /etc/nginx/sites-enabled/${siteFQDN}.conf +server { + listen 443 ssl; + root ${htmlRootDir}; + index index.php index.html index.htm; + + ssl on; + ssl_certificate /mahara/certs/nginx.crt; + ssl_certificate_key /mahara/certs/nginx.key; + + # Log to syslog + error_log syslog:server=localhost,facility=local1,severity=error,tag=mahara; + access_log syslog:server=localhost,facility=local1,severity=notice,tag=mahara mahara_combined; + + # Log XFF IP instead of varnish + set_real_ip_from 10.0.0.0/8; + set_real_ip_from 127.0.0.1; + set_real_ip_from 172.16.0.0/12; + set_real_ip_from 192.168.0.0/16; + real_ip_header X-Forwarded-For; + real_ip_recursive on; + + location / { + proxy_set_header Host \$host; + proxy_set_header HTTP_REFERER \$http_referer; + proxy_set_header X-Forwarded-Host \$host; + proxy_set_header X-Forwarded-Server \$host; + proxy_set_header X-Forwarded-Proto https; + proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; + proxy_pass http://localhost:80; + + proxy_connect_timeout 3600; + proxy_send_timeout 3600; + proxy_read_timeout 3600; + send_timeout 3600; + } +} +EOF + fi + + if [ "$webServerType" = "nginx" ]; then + cat <> /etc/nginx/sites-enabled/${siteFQDN}.conf +server { + listen 81 default; + server_name ${siteFQDN}; + root ${htmlRootDir}; + index index.php index.html index.htm; + + # Log to syslog + error_log syslog:server=localhost,facility=local1,severity=error,tag=mahara; + access_log syslog:server=localhost,facility=local1,severity=notice,tag=mahara mahara_combined; + + # Log XFF IP instead of varnish + set_real_ip_from 10.0.0.0/8; + set_real_ip_from 127.0.0.1; + set_real_ip_from 172.16.0.0/12; + set_real_ip_from 192.168.0.0/16; + real_ip_header X-Forwarded-For; + real_ip_recursive on; +EOF + if [ "$httpsTermination" != "None" ]; then + cat <> /etc/nginx/sites-enabled/${siteFQDN}.conf + # Redirect to https + if (\$http_x_forwarded_proto != https) { + return 301 https://\$server_name\$request_uri; + } + rewrite ^/(.*\.php)(/)(.*)$ /\$1?file=/\$3 last; +EOF + fi + cat <> /etc/nginx/sites-enabled/${siteFQDN}.conf + # Filter out php-fpm status page + location ~ ^/server-status { + return 404; + } + + location / { + try_files \$uri \$uri/index.php?\$query_string; + } + + location ~ [^/]\.php(/|$) { + fastcgi_split_path_info ^(.+?\.php)(/.*)$; + if (!-f \$document_root\$fastcgi_script_name) { + return 404; + } + + fastcgi_buffers 16 16k; + fastcgi_buffer_size 32k; + fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name; + fastcgi_pass unix:/run/php/php7.0-fpm.sock; + fastcgi_read_timeout 3600; + fastcgi_index index.php; + include fastcgi_params; + } +} + +EOF + fi # if [ "$webServerType" = "nginx" ]; + + if [ "$webServerType" = "apache" ]; then + # Configure Apache/php + sed -i "s/Listen 80/Listen 81/" /etc/apache2/ports.conf + 
a2enmod rewrite && a2enmod remoteip && a2enmod headers + + cat <> /etc/apache2/sites-enabled/${siteFQDN}.conf + + ServerName ${siteFQDN} + + ServerAdmin webmaster@localhost + DocumentRoot ${htmlRootDir} + + + Options FollowSymLinks + AllowOverride All + Require all granted + +EOF + if [ "$httpsTermination" != "None" ]; then + cat <> /etc/apache2/sites-enabled/${siteFQDN}.conf + # Redirect unencrypted direct connections to HTTPS + + RewriteEngine on + RewriteCond %{HTTP:X-Forwarded-Proto} !https [NC] + RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301] + +EOF + fi + cat <> /etc/apache2/sites-enabled/${siteFQDN}.conf + # Log X-Forwarded-For IP address instead of varnish (127.0.0.1) + SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded + LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined + LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" forwarded + ErrorLog "|/usr/bin/logger -t mahara -p local1.error" + CustomLog "|/usr/bin/logger -t mahara -p local1.notice" combined env=!forwarded + CustomLog "|/usr/bin/logger -t mahara -p local1.notice" forwarded env=forwarded + + +EOF + fi # if [ "$webServerType" = "apache" ]; + + # php config + if [ "$webServerType" = "apache" ]; then + PhpIni=/etc/php/7.0/apache2/php.ini + else + PhpIni=/etc/php/7.0/fpm/php.ini + fi + sed -i "s/memory_limit.*/memory_limit = 512M/" $PhpIni + sed -i "s/max_execution_time.*/max_execution_time = 18000/" $PhpIni + sed -i "s/max_input_vars.*/max_input_vars = 100000/" $PhpIni + sed -i "s/max_input_time.*/max_input_time = 600/" $PhpIni + sed -i "s/upload_max_filesize.*/upload_max_filesize = 1024M/" $PhpIni + sed -i "s/post_max_size.*/post_max_size = 1056M/" $PhpIni + sed -i "s/;opcache.use_cwd.*/opcache.use_cwd = 1/" $PhpIni + sed -i "s/;opcache.validate_timestamps.*/opcache.validate_timestamps = 1/" $PhpIni + sed -i "s/;opcache.save_comments.*/opcache.save_comments = 1/" $PhpIni + sed -i "s/;opcache.enable_file_override.*/opcache.enable_file_override = 0/" $PhpIni + sed -i "s/;opcache.enable.*/opcache.enable = 1/" $PhpIni + sed -i "s/;opcache.memory_consumption.*/opcache.memory_consumption = 256/" $PhpIni + sed -i "s/;opcache.max_accelerated_files.*/opcache.max_accelerated_files = 8000/" $PhpIni + + # Remove the default site. 
Mahara is the only site we want + rm -f /etc/nginx/sites-enabled/default + if [ "$webServerType" = "apache" ]; then + rm -f /etc/apache2/sites-enabled/000-default.conf + fi + + if [ "$webServerType" = "nginx" -o "$httpsTermination" = "VMSS" ]; then + # restart Nginx + sudo service nginx restart + fi + + if [ "$webServerType" = "nginx" ]; then + # fpm config - overload this + cat < /etc/php/7.0/fpm/pool.d/www.conf +[www] +user = www-data +group = www-data +listen = /run/php/php7.0-fpm.sock +listen.owner = www-data +listen.group = www-data +pm = dynamic +pm.max_children = 3000 +pm.start_servers = 20 +pm.min_spare_servers = 20 +pm.max_spare_servers = 30 +EOF + + # Restart fpm + service php7.0-fpm restart + fi + + if [ "$webServerType" = "apache" ]; then + sudo service apache2 restart + fi + + # Configure varnish startup for 16.04 + VARNISHSTART="ExecStart=\/usr\/sbin\/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f \/etc\/varnish\/mahara.vcl -S \/etc\/varnish\/secret -s malloc,1024m -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p timeout_linger=100 -p timeout_idle=30 -p send_timeout=1800 -p thread_pools=4 -p http_max_hdr=512 -p workspace_backend=512k" + sed -i "s/^ExecStart.*/${VARNISHSTART}/" /lib/systemd/system/varnish.service + + # Configure varnish VCL for Mahara + cat <> /etc/varnish/mahara.vcl +vcl 4.0; + +import std; +import directors; +backend default { + .host = "localhost"; + .port = "81"; + .first_byte_timeout = 3600s; + .connect_timeout = 600s; + .between_bytes_timeout = 600s; +} + +sub vcl_recv { + # Varnish does not support SPDY or HTTP/2.0 untill we upgrade to Varnish 5.0 + if (req.method == "PRI") { + return (synth(405)); + } + + if (req.restarts == 0) { + if (req.http.X-Forwarded-For) { + set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip; + } else { + set req.http.X-Forwarded-For = client.ip; + } + } + + # Non-RFC2616 or CONNECT HTTP requests methods filtered. Pipe requests directly to backend + if (req.method != "GET" && + req.method != "HEAD" && + req.method != "PUT" && + req.method != "POST" && + req.method != "TRACE" && + req.method != "OPTIONS" && + req.method != "DELETE") { + return (pipe); + } + + # Varnish don't mess with healthchecks + if (req.url ~ "^/admin/tool/heartbeat" || req.url ~ "^/healthcheck.php") + { + return (pass); + } + + # Pipe requests to backup.php straight to backend - prevents problem with progress bar long polling 503 problem + # This is here because backup.php is POSTing to itself - Filter before !GET&&!HEAD + if (req.url ~ "^/backup/backup.php") + { + return (pipe); + } + + # Varnish only deals with GET and HEAD by default. If request method is not GET or HEAD, pass request to backend + if (req.method != "GET" && req.method != "HEAD") { + return (pass); + } + + ### Rules for Mahara and Totara sites ### + # Mahara doesn't require Cookie to serve following assets. Remove Cookie header from request, so it will be looked up. 
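+ # (Note: several of the handler paths below, e.g. theme/styles.php and
+ # lib/javascript.php, appear to come from the Totara/Moodle variant of this
+ # VCL and may never match on a Mahara site; the rules are harmless if so.)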
+
+    ### Rules for Mahara and Totara sites ###
+    # Mahara doesn't require a Cookie to serve the following assets. Remove the Cookie header from the request so the object will be looked up in the cache.
+    if ( req.url ~ "^/altlogin/.+/.+\.(png|jpg|jpeg|gif|css|js|webp)$" ||
+         req.url ~ "^/pix/.+\.(png|jpg|jpeg|gif)$" ||
+         req.url ~ "^/theme/font.php" ||
+         req.url ~ "^/theme/image.php" ||
+         req.url ~ "^/theme/javascript.php" ||
+         req.url ~ "^/theme/jquery.php" ||
+         req.url ~ "^/theme/styles.php" ||
+         req.url ~ "^/theme/yui" ||
+         req.url ~ "^/lib/javascript.php/-1/" ||
+         req.url ~ "^/lib/requirejs.php/-1/"
+       )
+    {
+        set req.http.X-Long-TTL = "86400";
+        unset req.http.Cookie;
+        return (hash);
+    }
+
+    # Perform a lookup for selected assets that we know are static but for which Mahara still needs a Cookie
+    if ( req.url ~ "^/theme/.+\.(png|jpg|jpeg|gif|css|js|webp)" ||
+         req.url ~ "^/lib/.+\.(png|jpg|jpeg|gif|css|js|webp)" ||
+         req.url ~ "^/pluginfile.php/[0-9]+/course/overviewfiles/.+\.(?i)(png|jpg)$"
+       )
+    {
+        # Set an internal temporary header, based on which we will do things in vcl_backend_response
+        set req.http.X-Long-TTL = "86400";
+        return (hash);
+    }
+
+    # Serve requests for the SCORM checknet.txt from varnish. We have to remove the GET parameters. The response body always contains "1"
+    if ( req.url ~ "^/lib/yui/build/mahara-core-checknet/assets/checknet.txt" )
+    {
+        set req.url = regsub(req.url, "(.*)\?.*", "\1");
+        unset req.http.Cookie; # Will go to hash anyway at the end of vcl_recv
+        set req.http.X-Long-TTL = "86400";
+        return (hash);
+    }
+
+    # Requests containing "Cookie" or "Authorization" headers will not be cached
+    if (req.http.Authorization || req.http.Cookie) {
+        return (pass);
+    }
+
+    # Almost everything in Mahara correctly serves Cache-Control headers when
+    # needed, which varnish will honor, but there are some URLs which don't. Rather
+    # than explicitly finding them all and listing them here we just fail safe
+    # and don't cache unknown URLs that get this far.
+    return (pass);
+}
+
+sub vcl_backend_response {
+    # Happens after we have read the response headers from the backend.
+    #
+    # Here you clean the response headers, removing silly Set-Cookie headers
+    # and other mistakes your backend does.
+
+    # We know these assets are static, so set a TTL > 0 and allow client caching
+    if ( beresp.http.Cache-Control && bereq.http.X-Long-TTL && beresp.ttl < std.duration(bereq.http.X-Long-TTL + "s", 1s) && !beresp.http.WWW-Authenticate )
+    { # If max-age < the value defined in the X-Long-TTL header
+        set beresp.http.X-Orig-Pragma = beresp.http.Pragma; unset beresp.http.Pragma;
+        set beresp.http.X-Orig-Cache-Control = beresp.http.Cache-Control;
+        set beresp.http.Cache-Control = "public, max-age="+bereq.http.X-Long-TTL+", no-transform";
+        set beresp.ttl = std.duration(bereq.http.X-Long-TTL + "s", 1s);
+        unset bereq.http.X-Long-TTL;
+    }
+    else if( !beresp.http.Cache-Control && bereq.http.X-Long-TTL && !beresp.http.WWW-Authenticate ) {
+        set beresp.http.X-Orig-Pragma = beresp.http.Pragma; unset beresp.http.Pragma;
+        set beresp.http.Cache-Control = "public, max-age="+bereq.http.X-Long-TTL+", no-transform";
+        set beresp.ttl = std.duration(bereq.http.X-Long-TTL + "s", 1s);
+        unset bereq.http.X-Long-TTL;
+    }
+    else { # Don't touch the headers if max-age > the value defined in the X-Long-TTL header
+        unset bereq.http.X-Long-TTL;
+    }
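+
+    # Note that the original Pragma and Cache-Control values are stashed in the X-Orig-* headers
+    # above and restored in vcl_deliver, so clients still see the backend's own caching headers
+    # while Varnish keeps the object for the longer X-Long-TTL lifetime.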
+
+    # Set the X-Trace header, prepending it to any X-Trace header received from the backend. Useful for troubleshooting
+    if (beresp.http.x-trace && !beresp.was_304) {
+        set beresp.http.X-Trace = regsub(server.identity, "^([^.]+),?.*$", "\1")+"->"+regsub(beresp.backend.name, "^(.+)\((?:[0-9]{1,3}\.){3}([0-9]{1,3})\)","\1(\2)")+"->"+beresp.http.X-Trace;
+    }
+    else {
+        set beresp.http.X-Trace = regsub(server.identity, "^([^.]+),?.*$", "\1")+"->"+regsub(beresp.backend.name, "^(.+)\((?:[0-9]{1,3}\.){3}([0-9]{1,3})\)","\1(\2)");
+    }
+
+    # Gzipping JS and CSS is done at the nginx level; doing it here doesn't respect no-buffer requests
+    # if (beresp.http.content-type ~ "application/javascript.*" || beresp.http.content-type ~ "text") {
+    #     set beresp.do_gzip = true;
+    # }
+}
+
+sub vcl_deliver {
+
+    # Revert to the original Cache-Control header before delivery to the client
+    if (resp.http.X-Orig-Cache-Control)
+    {
+        set resp.http.Cache-Control = resp.http.X-Orig-Cache-Control;
+        unset resp.http.X-Orig-Cache-Control;
+    }
+
+    # Revert to the original Pragma header before delivery to the client
+    if (resp.http.X-Orig-Pragma)
+    {
+        set resp.http.Pragma = resp.http.X-Orig-Pragma;
+        unset resp.http.X-Orig-Pragma;
+    }
+
+    # (Optional) An X-Cache HTTP header is added to the response, indicating whether the object was retrieved from the backend or served from cache
+    if (obj.hits > 0) {
+        set resp.http.X-Cache = "HIT";
+    } else {
+        set resp.http.X-Cache = "MISS";
+    }
+
+    # Set the X-AuthOK header when Totara/Varnish authentication succeeded
+    if (req.http.X-AuthOK) {
+        set resp.http.X-AuthOK = req.http.X-AuthOK;
+    }
+
+    # If desired, the "Via: 1.1 Varnish-v4" and Server response headers can be removed from the response
+    unset resp.http.Via;
+    unset resp.http.Server;
+
+    return (deliver);
+}
+
+sub vcl_backend_error {
+    # More comprehensive varnish error page. Displays the time, instance hostname, host header and URL for easier troubleshooting.
+    set beresp.http.Content-Type = "text/html; charset=utf-8";
+    set beresp.http.Retry-After = "5";
+    synthetic( {"
+<!DOCTYPE html>
+<html>
+  <head>
+    <title>"} + beresp.status + " " + beresp.reason + {"</title>
+  </head>
+  <body>
+    <h1>Error "} + beresp.status + " " + beresp.reason + {"</h1>
+    <p>"} + beresp.reason + {"</p>
+    <h3>Guru Meditation:</h3>
+    <p>Time: "} + now + {"</p>
+    <p>Node: "} + server.hostname + {"</p>
+    <p>Host: "} + bereq.http.host + {"</p>
+    <p>URL: "} + bereq.url + {"</p>
+    <p>XID: "} + bereq.xid + {"</p>
+    <hr>
+    <p>Varnish cache server</p>
+  </body>
+</html>
+"} );
+    return (deliver);
+}
+
+sub vcl_synth {
+
+    # Redirect using '301 Moved Permanently' (permanent redirect)
+    if (resp.status == 851) {
+        set resp.http.Location = req.http.x-redir;
+        set resp.http.X-Varnish-Redirect = true;
+        set resp.status = 301;
+        return (deliver);
+    }
+
+    # Redirect using '302 Found' (temporary redirect)
+    if (resp.status == 852) {
+        set resp.http.Location = req.http.x-redir;
+        set resp.http.X-Varnish-Redirect = true;
+        set resp.status = 302;
+        return (deliver);
+    }
+
+    # Redirect using '307 Temporary Redirect' for !GET && !HEAD requests; doesn't change the method on redirected requests
+    if (resp.status == 857) {
+        set resp.http.Location = req.http.x-redir;
+        set resp.http.X-Varnish-Redirect = true;
+        set resp.status = 307;
+        return (deliver);
+    }
+
+    # Respond with '403 Forbidden'
+    if (resp.status == 863) {
+        set resp.http.X-Varnish-Error = true;
+        set resp.status = 403;
+        return (deliver);
+    }
+}
+EOF
+
+  # Restart Varnish
+  systemctl daemon-reload
+  service varnish restart
+
+} > /tmp/setup.log
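+
+# If troubleshooting is needed, the full provisioning output is captured in /tmp/setup.log, and the
+# generated VCL can be syntax-checked independently of the running service with, e.g.,
+# 'varnishd -C -f /etc/varnish/mahara.vcl > /dev/null'.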