Can I spin up a new Web Gateway from a VM Template? I'd like to have it come up licensed and configured with the REST API enabled. When I try to clone an existing MWG, the interfaces don't come up properly, I get boot errors, and my policy, configuration, and license don't load.
The answer is yes, you can. The challenge is that each new clone will come up either with the same UUID, the same MAC addresses, and the same IPs as the template (if you choose "I moved it"), or with a new UUID and new MACs (if you choose "I copied it"), which won't attach to your existing configuration.
None of what follows is currently officially supported by McAfee, but the techniques are based on this article, written by McAfee, for putting an existing configuration on new hardware which also has a different UUID and different MAC addresses: Web-Gateway-Restoring-a-backup-after-a-Hardware-replacement
Once the images are up, they should be fully supportable provided you haven't done any low level "hacking" of the configuration files via command line or otherwise outside of the GUI.
Cloning an MWG from a template does take a significant amount of time and does not currently support automated clustering. However, you can update engines, DATs, policy, etc. via automation and clever use of scheduled jobs to partially overcome this. Your spin-up times will vary based on your environment, but if they are too long to suit your needs, I recommend the fully supported "semi-elastic" method: keep VMs preconfigured, updated, and clustered, but dormant, so they can be powered on as needed much more quickly. You might also consider a hybrid strategy of one dormant clustered MWG plus the capability of adding more unclustered clones with longer spin-up times.
You may also want to consider logging and reporting for dynamic MWGs. This is not a big issue if you are just turning clustered, preconfigured VMs on and off, as the logs will be retained on the dormant VMs. But if you are creating and destroying VMs on demand, you should either use syslog or automate transferring the logs via a forced push from the MWG before the cloned system is shut down.
The process described here will work just fine and should be supported in a private cloud environment where McAfee supports building a VM instance from a customer supplied template. This should also be possible in Azure when running on top of Hyper-V where McAfee should also support customer built templates.
While you could build a custom AMI for use in AWS, McAfee does not presently support spinning up from a custom customer AMI, and the McAfee AMIs do not include licensing or, more importantly, the ability to load a full customer-specific backup file and the necessary scripting to attach that configuration to a new UUID and MAC addresses.
This reply covers how to create a template that can be used to spawn multiple unique clones. The clones must use DHCP to obtain addresses for all configured interfaces.
To build such a template, first create a properly licensed and updated MWG VM.
Configure the VM to use DHCP on all interfaces. Set up your administrator credentials, enable the REST interface, and make any other configuration changes that you want for all clones: proxy ports, authentication settings, domain membership, UI cert, etc. Also restore your latest policy, including the SSL CA. Do not configure hybrid synch on the template! (Hybrid synch should be from a single, always-on MWG.)
1) Set up scheduled jobs
One job on your always on MWG to write a backup to its file server
Two jobs on your template, one to download the backup from the always on MWG and another that is started after the first to restore the downloaded backup.
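If you prefer to drive the template-side jobs from the command line instead of (or in addition to) GUI scheduled jobs, the REST interface you enabled can do the hand-off. This is only a hedged sketch: the host name and credentials are placeholders, and the exact REST paths (particularly `/backup`) should be verified against the MWG REST API documentation for your version.

```shell
#!/bin/bash
# Sketch: fetch the latest backup from the always-on MWG via its REST
# interface. REST base path is the documented MWG convention; the
# "/backup" endpoint and credentials below are assumptions to verify.
REST="https://mwg-alwayson.example.local:4712/Konfigurator/REST"
COOKIES=$(mktemp)

mwg_login() {
  # Session-based login; replace the placeholder credentials.
  curl -sk -c "$COOKIES" -X POST "$REST/login?userName=admin&pass=CHANGE_ME"
}

mwg_fetch_backup() {
  # Download the backup to the location the rearm process expects.
  curl -sk -b "$COOKIES" -X POST "$REST/backup" -o /opt/pod/pod.backup
}

mwg_logout() {
  curl -sk -b "$COOKIES" -X POST "$REST/logout"
}

# Usage (not run here): mwg_login && mwg_fetch_backup; mwg_logout
```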
2) Copy the attached rearm.sh and rc.local files to /opt/pod/
3) Create a backup
4) Rename the backup to pod.backup and store the backup in /opt/pod/
5) Run ./rearm.sh from /opt/pod
You may need to chmod 744 rearm.sh first. If the coordinator says "OK, Continue...", hit "Enter"; the rest of the configuration will complete, and the system will shut down and be ready for cloning.
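Steps 2 through 5 can be sketched as a small staging helper. This assumes the layout described above; the name of the freshly created backup file (`template.backup`) and the helper itself are illustrative, not part of the attached scripts.

```shell
#!/bin/bash
# Sketch of steps 2-5: stage rearm files and the backup in the pod
# directory. POD_DIR and the pod.backup name come from the post; the
# incoming backup filename is an assumption.
POD_DIR="${POD_DIR:-/opt/pod}"
BACKUP_SRC="${BACKUP_SRC:-template.backup}"

prepare_pod() {
  # Step 2 must already be done: rearm.sh (and rc.local) copied to $POD_DIR.
  [ -f "$POD_DIR/rearm.sh" ] || { echo "rearm.sh missing in $POD_DIR" >&2; return 1; }
  # Step 4: rename the backup to pod.backup and store it in $POD_DIR.
  cp "$BACKUP_SRC" "$POD_DIR/pod.backup"
  # chmod 744 in case rearm.sh came over without the execute bit.
  chmod 744 "$POD_DIR/rearm.sh"
  # Step 5 (not run here, since it ends in a shutdown):
  #   cd "$POD_DIR" && ./rearm.sh
}
```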
If you instead get the message "Your pod.backup file is stale!", hit Ctrl-C to exit; otherwise your rc.local will get messed up when you run ./rearm.sh the second time.
Do not reboot the template after this point unless you need to make a template change or update the appliance software. If you do need to make a template change, you need to repeat the last three steps before the template will be ready for cloning again. If you are automating, consider automating the spin-up and rearm of the template nightly, weekly, or monthly.
When cloning the template always choose "I copied it" so each clone has unique UUID and MAC addresses.
Configured management, file server, SSH, and proxy ports should come up if they are configured for 0.0.0.0. However, to bring them up reliably, the script ensures that the system will automatically reboot once so that the MWG services all attach properly to the new MACs and DHCP-assigned IP addresses. The reboot in the rc.local script does not appear necessary if the file server ports are not needed; it could be replaced with service mwg restart. If the script is changed to just restart services and the file server ports don't come up, an easy fix is to disable the ports, save, re-enable, save again, or just reboot.
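For reference, the one-shot reboot described above could look something like the fragment below. This is a hypothetical sketch, not the contents of the attached rc.local; the marker file name is an assumption, and the marker-before-reboot ordering is what prevents a reboot loop.

```shell
# Hypothetical rc.local fragment: reboot exactly once on the clone's
# first boot so MWG services attach to the new MACs and DHCP IPs.
FIRST_BOOT_MARKER=/opt/pod/.first-boot-done   # marker name is an assumption

if [ ! -f "$FIRST_BOOT_MARKER" ]; then
  # Record the marker first, so a failure here cannot cause a reboot loop.
  # If the file server ports are not needed, "service mwg restart" may be
  # enough instead of the full reboot.
  touch "$FIRST_BOOT_MARKER" && reboot
fi
```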
This solution has not been extensively tested, so I encourage you to test your specific implementation and configuration thoroughly. I hope you find this post useful. If so, please Kudo it so others can find it more easily.
Questions, suggestions, comments and improvements are always welcome.
Another option, if you have a cloud license, is to take advantage of the elasticity provided by Skyhigh cloud. A single always-on VM can offload web filtering to the cloud service via Next Hop Proxy. The policy could be managed via SSE (formerly called UCE) or the on-premise proxy. This article describes next hop proxy setup with UCE/SSE-managed policy: Next Hop Proxy from On Prem to Cloud. A single VM can handle a large amount of traffic if its only functions are authentication and next hop proxy. It would also rarely need an appliance upgrade, wouldn't need engine updates, and would rarely need policy updates unless filtering policy was being managed from the MWG rather than the cloud.