The tocario trueDaaS solution consists of two components: the Management System and a set of VM hosts. A basic scenario has at least two VM hosts. The Management System runs Debian Linux and must be set up highly available (HA). This is possible either by clustering two or more physical servers or by running the Management System as a VM on a third-party HA virtualization environment (such as Hyper-V or vSphere) with redundant resources. Running the Management System on a trueDaaS VM host is not supported.
The Management System communicates with the VM hosts over the Internal Network. This network can be a VLAN or VXLAN on the switch infrastructure; however, it must be connected natively (without any tag) to every VM host and to the Management System. The NFS Storage must be reachable from this Internal Network, either directly or routed.
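On a Linux-based host, a native (untagged) attachment to the Internal Network alongside tagged customer VLANs could be sketched with the iproute2/bridge tools. This is only an illustration; the interface names and VLAN IDs below are placeholders, not part of the product:

```shell
# Illustrative sketch: attach the uplink to a VLAN-aware bridge so the
# Internal Network arrives untagged while tagged VLANs pass through.
ip link add br0 type bridge vlan_filtering 1
ip link set eth0 master br0
ip link set eth0 up
ip link set br0 up

# Internal Network: native/untagged on the uplink (example PVID 1)
bridge vlan add dev eth0 vid 1 pvid untagged

# Customer Networks: example tagged VLAN range 100-199, forwarded as-is
bridge vlan add dev eth0 vid 100-199
```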
For end-user access to the Web Portal, the Management System needs to be connected to the public Internet either directly or translated (NAT). For external access, port 443/tcp (HTTPS) must be open from the public Internet. It is recommended to also open port 80/tcp for convenient HTTP access (forwarding to HTTPS only).
The Management System needs outgoing (to the public Internet) access to the following destinations:
● public DNS server (53/udp and 53/tcp)
● public NTP server (123/udp)
● SMTP mail relay or Mandrill API access
● tocario support and accounting API (tdm.tocario.com on 443/tcp)
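The incoming and outgoing rules above could be expressed as an nftables sketch on the Management System's firewall. This is an assumption-laden illustration, not a supplied configuration: the SMTP relay port is assumed to be 25/tcp (587/tcp if submission is used), and the Mandrill and tocario APIs are reached over 443/tcp:

```shell
# Illustrative nftables policy for the Management System (placeholders only)
nft add table inet mgmt
nft add chain inet mgmt input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet mgmt input ct state established,related accept
# End-user access to the Web Portal: 443/tcp required, 80/tcp for the redirect
nft add rule inet mgmt input tcp dport '{ 80, 443 }' accept

nft add chain inet mgmt output '{ type filter hook output priority 0; policy drop; }'
nft add rule inet mgmt output ct state established,related accept
# Outgoing: DNS (53), NTP (123), SMTP relay (25 assumed),
# Mandrill API and tdm.tocario.com (443)
nft add rule inet mgmt output udp dport '{ 53, 123 }' accept
nft add rule inet mgmt output tcp dport '{ 53, 25, 443 }' accept
```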
Technically, the connectivity between the Management System and the public Internet can be realized over the Internal Network; however, for security reasons this is not recommended. Because every customer gets a dedicated layer 2 network, there may be many dedicated Customer Networks (see 1.). The encapsulation protocol(s) in use (VLAN and/or VXLAN) must be forwarded transparently across all VM hosts. In most cases, a defined VLAN range and/or VXLAN range is configured.
No component between the VM hosts may block any of the configured VLANs or VXLANs.

The embedded Customer Router provides the public Internet access from the dedicated Customer Network. One dedicated instance of the embedded Customer Router runs for each Customer Network. This Customer Router can provide network functions such as NAT, DHCP server, DNS server and site-to-site VPN connectivity for every customer. Every Customer Router has a second interface in a Transfer Network, which is a public Internet network. Public IP addresses on this interface of the Customer Router are required for built-in functions such as VPN and D-NAT (port forwarding).

The other kind of embedded system in the Transfer Network is the Connection Proxy. A Connection Proxy is used whenever an end user connects to a Desktop; for that reason, all Connection Proxies require a public Internet address as well. A firewall between the Transfer Network segment and the rest of the public Internet must be configured to allow incoming port 443/tcp to all addresses of the Connection Proxies. All embedded systems run as virtual machines on the VM hosts. All TLS endpoints (Management System and Connection Proxies) require valid, publicly signed X.509 certificates.
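The Transfer Network firewall rule and a certificate spot-check could look as follows. Everything here is a placeholder sketch: 203.0.113.0/24 stands in for the Transfer Network prefix, and the hostname is invented for illustration:

```shell
# Illustrative perimeter rule: allow incoming 443/tcp to all Connection
# Proxy addresses in the Transfer Network (203.0.113.0/24 is a placeholder).
nft add table inet transfer
nft add chain inet transfer forward '{ type filter hook forward priority 0; policy drop; }'
nft add rule inet transfer forward ct state established,related accept
nft add rule inet transfer forward ip daddr 203.0.113.0/24 tcp dport 443 accept

# Spot-check that a TLS endpoint presents a certificate chaining to a
# public CA; exits non-zero on verification failure. Hostname is illustrative.
openssl s_client -connect proxy1.example.com:443 -verify_return_error </dev/null
```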
Management System hardware or VM requirements:
● Processor: x86-based 64-bit CPU; 1 socket
● Memory: 24 GB RAM, ECC registered, NUMA balanced
● Network: redundant (at least 2) Gb Ethernet
● Storage: >146 GB logical RAID disk; fault tolerant with auto recovery (RAID level 1, 5 or 6); hot spare
● Management (only if physical cluster): iLO / DRAC / IPMI functionality and licenses for remote console and server display (e.g. iLO Advanced license)
VM host hardware requirements:
● Processor: Intel Xeon processor (at least the Westmere generation)
● Memory: depending on sizing; 32 GB minimum recommended; ECC registered, QPI (NUMA)
● Network: depending on sizing; 4x Gb Ethernet or 2x 10 Gb Ethernet minimum
● Storage: optional local storage, for SWAP location or differential desktop image writes; if used:
● Management: iLO / DRAC / IPMI functionality and licenses for remote console and server display (e.g. iLO Advanced license)
NFS Storage requirements:
● NFSv3 or v4 compliant (at least 2 exports: live and backup)
● Size: depending on sizing; space for desktop disks + backup storage
● IO capacity: depending on sizing; avg. 15 – 50 IOPS per desktop recommended
● NetApp: integration via the NetApp cDOT API on an SVM for best features/performance. The ONTAP API must be accessible by the Management System.
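On a generic NFS server, the two required exports (live and backup) might be declared as below. The paths, export options and the Internal Network prefix 10.0.0.0/24 are illustrative assumptions, not product defaults:

```shell
# Illustrative /etc/exports entries for the two required exports;
# 10.0.0.0/24 is a placeholder for the Internal Network.
cat >> /etc/exports <<'EOF'
/export/live    10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)
/export/backup  10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)
EOF

exportfs -ra            # re-read /etc/exports
showmount -e localhost  # verify both exports are visible
```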
Alternative: Block Storage
If no native NFS Storage is available, the tocario Block Storage Connector can be used to connect block storage. The Block Storage Connector is a software appliance and must be set up highly available (HA). This is possible either by clustering two or more physical servers or by running the Block Storage Connector as a VM on a third-party HA virtualization environment (such as Hyper-V or vSphere) with redundant resources.
DNS entries and SSL certificates are required for the Portal webserver and the Connection Proxies. The certificates must be valid for any user/client accessing the Portal or a Connection Proxy; therefore, self-signed certificates are not recommended.
For the Portal webserver, a single-host or wildcard certificate can be used; its Common Name (CN) must match the Portal URL.
The corresponding DNS entry must point to a (dedicated or shared) public IP address (the public Portal IP). If a shared public IP address is used, ports 80/tcp and 443/tcp must be forwarded from this public Portal IP to the Management System IP (Internal Net).
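For the shared-IP case, the port forwarding could be sketched as a DNAT rule on the perimeter firewall. The addresses are placeholders (198.51.100.10 for the public Portal IP, 10.0.0.5 for the Management System on the Internal Net):

```shell
# Illustrative DNAT: forward 80/tcp and 443/tcp from the shared public
# Portal IP to the Management System's Internal Net address.
nft add table ip portalnat
nft add chain ip portalnat prerouting '{ type nat hook prerouting priority -100; }'
nft add rule ip portalnat prerouting ip daddr 198.51.100.10 tcp dport '{ 80, 443 }' dnat to 10.0.0.5
```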
For the Connection Proxies a wildcard certificate is required, as the same certificate is used for every started Connection Proxy.
The corresponding DNS entries must point to dedicated public IP addresses within the Transfer Net. The public IP addresses are configured automatically on the interfaces of the Connection Proxies.
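When planning the wildcard certificate, note that a certificate wildcard covers exactly one DNS label. This can be sanity-checked with a small POSIX shell helper; the function name and hostnames below are illustrative, not part of the product:

```shell
# Hypothetical helper: does a hostname fall under a wildcard CN such as
# *.proxy.example.com? A wildcard matches exactly one label, so
# "one.proxy.example.com" matches but "a.b.proxy.example.com" does not.
matches_wildcard() {
  host=$1; cn=$2
  case $cn in
    \*.*)
      suffix=${cn#\*.}    # e.g. proxy.example.com
      label=${host%%.*}   # leftmost label of the hostname
      [ "$host" = "$label.$suffix" ] && [ "$label" != "$host" ]
      ;;
    *)
      [ "$host" = "$cn" ]
      ;;
  esac
}

matches_wildcard one.proxy.example.com '*.proxy.example.com' && echo covered
```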
For remote installation services and support, tocario requires permanent VPN access to the Internal Network.