1 x TIE 184.108.40.206 > We will install a NEW one with TIE 4.X and then do the cluster move to migrate (we understood that part)
2 x DXL 220.127.116.111 DXL broker (update of MLOS Agent 18.104.22.1682 done to latest on DXL)
I just wanted to update the DXL brokers as mentioned under the prerequisites, only to discover that an update via on-premise ePO is NOT possible and the DXL brokers also have to be installed fresh.
Is that correct?
Just to be sure before we do this and end up restoring: will ANYTHING change except the boot logo changing from McAfee to Trellix?
What do we have to UPDATE first?
The two DXL brokers and then the TIE?
Thank you for any help.
IMPORTANT: This release does not support in-place upgrade using Trellix ePO - On-prem Product Deployment for Trellix DXL Broker appliance server version 5.x.x / 6.0.0 to 6.0.3 deployed using the DXL Broker Server ISO or OVA and the TIE Server ISO or OVA. In this case, you can only upgrade the Trellix DXL Broker Server by setting up a new Trellix DXL Broker server on your choice of supported platform, or TIE Server 4.0.0.
For anyone else reviewing this thread who may find this useful: this situation appeared to be somewhat unique and was caused by some form of corruption or invalid state of the existing TIE CA within ePO.
This was made apparent by the following error located in the tieserver.log on the device in question:
Error generating keyStore or while setting the keyStore entry
java.security.KeyStoreException: Key protection algorithm not found: java.security.KeyStoreException: Certificate chain is not valid
We were able to resolve this by following this article to rebuild the TIE CA within ePO:
Once completed, the TIE server was able to complete its certificate requests with ePO and move on with the rest of the upgrade process.
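For anyone who wants to check whether they are hitting the same certificate problem, here is a minimal sketch that scans a tieserver.log for the errors quoted above. The exact log location varies by appliance version, so the path is not assumed; pass it as the first argument.

```shell
#!/bin/sh
# Sketch only: look for the keystore/certificate-chain errors quoted
# in this thread. Pass your tieserver.log path as the first argument.
LOG="${1:-tieserver.log}"
if grep -nE "KeyStoreException|Certificate chain is not valid" "$LOG"; then
  echo "certificate errors found - the TIE CA in ePO may need a rebuild"
else
  echo "no certificate-chain errors in $LOG"
fi
```

If the grep matches, that is the same symptom described here and rebuilding the TIE CA is worth trying before anything else.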
The DXL part has gotten a little complicated to explain to the network and firewall people over the last few days. "You have your own network," they say...
We just spent so much time on the McAfee > Trellix rebrand across all customers and are still unsure what will actually be updated here. Just the Trellix logo at the end, and we spend two days of work at this site alone for the TIE server?
I tried installing the NEW TIE 4.X in parallel to the old TIE 3.X and leaving the DXL brokers at 6.0.X.
All seemed to work fine until the end of the first real boot of the TIE 4 appliance, where it says and stays forever at:
* Not connected to any broker (RED)
* Waiting for TIE Server handshake ***** (this keeps looping; we waited 2 hours)
What we see:
* The DXL network seemed fine
* The primary was running fine > it mounted the 4.X automatically as secondary UNDER the primary
* ALL DXL points appear green under DXL Fabric
* The NEW TIE was in ePO and the TIESERVER tag was there
* TIE queries were working in ePO
* You could see clients connected to the new and old TIE (mainly DXL traffic, I guess, not TIE)
* Under Servers > TIE Server Topology you see the new one under the old one, as secondary
In the end I saw the primary/secondary reset KB and gave up.
What was also tried in the last steps:
* chmod 777 (as seen on the forum) on the old TIE, followed by a reboot
Finally, after 2 hours, we rebooted the second TIE and saw screenshot number 3.
We then did a fallback to snapshots of all components.
If you are actively encountering a product/upgrade issue, please contact support directly. The turnaround time on the forums is likely to add delay, and if I am reading your notes correctly, you have rolled back, so we have lost an opportunity to root-cause the issue and potentially move you forward with the upgrade.
The order of DXL and TIE upgrades in cases where you run TIE and DXL on separate servers is optional. In cases where you intend to install a broker on the TIE server, it is recommended to upgrade TIE first and use the built-in broker that the image ships with. This includes the 6.0.3 broker. For any brokers you may have running on Windows/RHEL/CentOS/etc., you can perform in-place upgrades.
Both releases are more than rebrands, including an underlying OS upgrade from MLOS2 to MLOS3.
Release notes for both products are available here:
I appreciate the feedback from your experience and will keep my eyes out for other examples of this happening, but if you require root cause you should engage support through a service request. We will likely need to review logs to understand where your DXL connection issue resides.
Thank you so much, Brian, for writing directly here so others can benefit.
It looks like we did everything as we should, and the update order (broker or TIE first) should have no effect. All our DXL brokers are MLOS-derivative based, not Windows or CentOS, etc.
Yes, I understand the point about support and the status we were in, but it was too late in the evening to proceed further with a ticket, and we had snapshots of all 5 machines with the VMs turned off.
We will start a second attempt early enough to make sure we can open a ticket in time.
Regarding the screenshots: how long should we wait for the DB sync? Our Postgres dump is 2.4 GB in size as an SQL file.
The items we see in RED: are those OK during the migration process (not the virtual disk...)?
Greetings from Switzerland
No, I was thinking about doing another try this week, with enough time so we can open a case.
From my point of view I still assume it's because the DXL broker was on an older version.
Glad for any input.
Greetings from Switzerland
In our case, all our brokers and extensions are already on the latest version.
Before touching the production environment we tested it in our lab environment, and in that situation TIE 4 was correctly upgraded following the described procedure.
My case has been open for a few minutes now, and I added this discussion URL as a comment to tell them that a possibly identical issue seems to be present on another customer/partner architecture as well.
First contact with support has already been established.
@SWISS, if you open a case, save time and upload these files directly:
/var/Trellix/tieserver/policy/dxl.policy and tieserver-lib.log
They were requested on my side; support is currently analyzing them.
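To attach both files in one go, a minimal sketch that bundles them into a single archive for the support upload. The dxl.policy path is the one given above; the location of tieserver-lib.log varies, so pass it as the first argument. The archive name is my own choice.

```shell
#!/bin/sh
# Sketch only: bundle the two files support asked for into one archive.
# dxl.policy path is from this thread; pass your tieserver-lib.log path
# as the first argument if it is not in the current directory.
LIBLOG="${1:-tieserver-lib.log}"
if tar -czf tie-support-bundle.tar.gz \
    /var/Trellix/tieserver/policy/dxl.policy "$LIBLOG" 2>/dev/null; then
  ls -lh tie-support-bundle.tar.gz
else
  echo "could not archive all files - check the paths first"
fi
```

Uploading one archive instead of loose files also keeps the timestamps intact, which support may want to correlate with the ePO side.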
Second try, same error, and we found out that there is a password maximum-length limit for a security appliance (well done, guys!!!).
THIS TRY ONLY: we did NOT update the TIE extension from 3.X > 4.X beforehand, because at that point the TIE function stops until the end of the migration, and in the worst case we would have to roll back the full environment. (In the first try in January we updated the TIE extension first, as in the manual.)
So, after redoing it, we finally had access to transition-status.sh.
I can only guess that the invalid port number "TIE" in PSQL is a bug?
When we skip the wait and reboot, we still see the error after the reboot. I installed from the OVA template, not from the ISO. This looks somehow OVA-template related to me. Could it be that a job is still running, and until that finishes it can't mount the other disks?