DCV
DCV enhances the graphics capabilities of 3D applications on Linux and Microsoft Windows, for both OpenGL and DirectX, to display complex visual data across multiple, simultaneous distributed displays over low-bandwidth networks. NICE DCV is the remote 3D visualization technology that enables Technical Computing users to connect to OpenGL or DirectX applications running in a data center.
Using NICE DCV, you can remotely work on 3D interactive applications, fully accelerated by high-end GPUs on workstations, blades or servers. Whether you are accessing high-end OpenGL modeling applications or simple viewers, NICE DCV lets you connect quickly and securely from anywhere and experience high frame rates, even over low-bandwidth standard Internet connections.
The product supports both Microsoft Windows and Linux systems, enabling collaborative capabilities in heterogeneous environments. Moreover, it integrates with NICE EnginFrame, bringing 2D/3D capabilities to the Web, including the ability to share a session with other users for collaboration or support purposes.
Features
- Collaboration
Support for multiple, collaborative end stations. The set of end stations can be dynamic, with connections being made and dropped throughout the DCV session
- H.264-based encoding
Greatly reduces bandwidth consumption
- Exploits the latest NVIDIA Grid SDK technologies
Improves performance and reduces system load. Uses the NVIDIA H.264 hardware encoder (on Kepler and GRID cards)
- Full Desktop Remotization
Uses the high-performance NICE DCV protocol for the remotization of the full desktop (not only for the 3D windows as in previous versions)
- Support for NVIDIA vGPU technology
Simplifies the deployment of Windows VMs with full application support
- High Quality Updates
Support for high quality updates when network and processor conditions allow
- Image Only Transport
Transmission of final rendered image, rather than geometry and scene information, providing insulation and protection of proprietary customer information
- User Selectable Compression Levels
Ability to specify the compression level used to send the final image over the wire to the endstation
- Pluggable Compression Capability
Pluggable compression/decompression (codec) framework, allowing the image compression/decompression algorithms to be replaced
- Smart-card remotization
Seamlessly access the local smart card, using the standard PC/SC interface. Use smart cards for encrypting emails, signing documents and authenticating against remote systems
- Adaptive server-side resolution
Automatically adapt the server-side screen resolution to the size of the viewer window
- USB remotization (Preview)
Plug USB devices on the client side and use them on the remote desktop
Typical Deployment
DCV uses an application host to run OpenGL or DirectX graphics applications and transmits the output to one or more end stations that connect to the application host over the network.
The application host sends updates (in the form of pixel data) to each connected end station. End stations send user events (such as mouse or keyboard actions) to the host. Each end station is responsible for:
- displaying one or more application windows running on the host machine;
- sending user interactions to the application host for processing.
Modes of Operation
| Technology | Hypervisors | Application Compatibility | OS Support |
|---|---|---|---|
| Bare Metal or GPU Pass-through | All | Maximum | Linux and Windows |
| NICE External Rendering Server | All | Limited | Windows |
| NVIDIA vGPU | XenServer 6.2 SP1 | Excellent | Windows |
Each mode has its own pros and limitations; please refer to the DCV documentation for details.
NICE External Rendering Server
DCV application hosts can optionally be configured to delegate the actual 3D rendering to a separate host, the rendering host. In this case the OpenGL application runs on an application host which does not provide 3D hardware acceleration and delegates the OpenGL rendering to a rendering host equipped with one or more 3D accelerated graphic adapters.
This configuration enables virtual machines to act as application hosts even if the virtual hardware emulated by the hypervisor does not provide OpenGL acceleration.
The rendering host:
- receives OpenGL commands from the applications running on the application hosts;
- sends the 3D image updates (in the form of pixel data) to each connected end station.
Installation Prerequisites
DCV requires the following hardware:
- a physical or virtual application host machine equipped with 3D accelerated video adapters (via GPU pass-through in the virtual case) and capable of running the OpenGL or DirectX applications;
- one or more remote machines (end stations), each of which is connected to an output display device;
- a network connection (WAN/LAN) between the application host and the end station.
For the specific requirements of each mode of operation, please refer to the manual.
End Station Requirements
Operating system: Linux
- Red Hat Enterprise Linux 5.x, 6.x, 7.x 32/64-bit
- SUSE Linux Enterprise Server 11 SP2 32/64-bit
Windows
- Microsoft Windows 7 32/64-bit
- Microsoft Windows 8, 8.1 32/64-bit
- Microsoft Windows 10 32/64-bit
Mac OS X
- Mac OS X Snow Leopard (10.6), Lion (10.7), Mountain Lion (10.8), Mavericks (10.9), Yosemite (10.10)
The DCV installer includes and automatically installs RealVNC Visualization Edition 4.6.x (Viewer).
The application host and its end stations can run different operating systems. Different versions of DCV Server and DCV End Station are compatible, but some features may not be available when not using the latest version. DCV is compatible with many plain-VNC clients from third parties. When using such clients, 3D images are delivered to the clients using the standard VNC protocol and therefore with reduced performance.
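For example, a plain VNC viewer can be pointed directly at the DCV-enabled display; the host name and display number below are placeholders, not values from this page:
# connect with a generic VNC viewer (TigerVNC/RealVNC style host:display syntax)
vncviewer dcvhost.example.com:1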
Note: Each operation mode may impose additional requirements on both the server and the end station. Please refer to the documentation for details.
X Server Requirements
On Linux, the host equipped with 3D accelerated video adapters must run an accelerated X Server configured according to these requirements:
- NVIDIA drivers must be correctly working on this display.
- In case of multiple GPUs on the same node, the suggested configuration is a single X Server for all GPUs, with one Screen per GPU (for example, :0.0 for the first accelerated GPU, :0.1 for the second).
- Local UNIX connections to the 3D accelerated X Server displays must be granted to the users of the 3D applications. The DCV libraries run inside the 3D applications launched by the users and redirect 3D calls to the accelerated display, so the application processes run by the users must be able to access it.
When running dcvadmin enable, DCV automatically searches for well-known X or display manager startup scripts and adds a call to /opt/nice/dcv/bin/dcvxgrantaccess. By default dcvxgrantaccess executes xhost +local:, so display access is granted when X is launched. It is possible to change this behaviour, for example to restrict access to a subset of users, by providing a custom implementation of dcvxgrantaccess (see the sketch after this list).
- Color depth must be 24-bit.
- Only when using Native Display mode on Linux with VNC in Service mode is additional configuration required to enable the vnc.so module.
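As an illustration of restricting display access to specific users rather than all local connections, a custom dcvxgrantaccess could rely on X server-interpreted user entries. This is only a minimal sketch: the script path and the default xhost +local: behaviour come from the paragraph above, while the display value and the user names are assumptions.
#!/bin/sh
# Minimal sketch of a custom /opt/nice/dcv/bin/dcvxgrantaccess.
# Instead of the default "xhost +local:" (any local connection), grant access
# on the accelerated display only to selected 3D application users.
# The display and the user list below are examples, not values from this page.
DISPLAY=${DISPLAY:-:0.0}
export DISPLAY
for user in alice bob; do
    xhost +si:localuser:"$user"   # server-interpreted entry: one local user
done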
Operation Guides
Component Guides
Notes
- Notes taken while installing the system
- Accessing the nodes via ssh
# access the nodes, do this from the boot node:
sudo su -
(password)
dev ssh
ssh <IP> # IP from the dashboard; access the private address of the nodes
- Reinstall a system without a full reboot
# on the boot node and as root
# edit the conf file
vi /mnt/flash/conf/pentos.conf.used
# then :
dev reinit
- Check status via CLI (if you can't access the web interface)
# on boot node
[root@boot-172-16-0-2 ~]# piston-dev.py cluster-info
{u'control': {u'state': u'initialize:wait-for-nodes'},
u'hosts': {u'172.16.1.3': {u'blessed': False,
u'context': {},
u'diskdata': None,
...
# or use the short version:
[root@boot-172-16-0-2 ~]# piston-dev.py cluster-info -s
{u'control': {u'state': u'optimal'},
u'hosts': {u'172.16.1.2': {u'host_ip': u'172.16.0.13',
u'progress': [],
u'status': u'ready'},
u'172.16.1.3': {u'host_ip': u'172.16.0.14',
u'progress': [],
u'status': u'ready'},
u'172.16.1.4': {u'host_ip': u'172.16.0.15',
u'progress': [],
u'status': u'ready'},
u'172.16.1.5': {u'host_ip': None,
u'progress': [],
u'status': u'stop'},
u'172.16.1.6': {u'host_ip': None,
u'progress': [],
u'status': u'booting'},
u'172.16.1.7': {u'host_ip': None,
u'progress': [],
u'status': u'booting'}}}
- Force reinstall
# create the file destroy-data on the USB root
# or on the boot node:
touch /mnt/usb1/destroy-data
- Problems with IPMI (timeouts, commands should complete in <2 seconds)
# this is the command piston will use on the IPMI module - test out on the boot node to diagnose IPMI issues
[root@boot-172-16-0-2 log]# ipmi-chassis --session-timeout 1999 --retransmission-timeout 1000 -u admin -p admin -D LAN_2_0 -h 172.16.1.5 --get-status
ipmi_ctx_open_outofband_2_0: session timeout
# above is bad! below is what you want to see
[root@boot-172-16-0-2 log]# ipmi-chassis -u admin -p admin -D LAN_2_0 -h 172.16.1.5 --get-status
System Power : off
Power overload : false
Interlock : inactive
Power fault : false
Power control fault : false
Power restore policy : Always off
Last Power Event : power on via ipmi c
### If you are seeing IPMI timeouts, it's probably because eth0/eth1 are trying to bond; note that the MACs are the same below (only one cable is plugged in in this instance)
eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:25:90:4b:7a:85 txqueuelen 1000 (Ethernet)
eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:25:90:4b:7a:85 txqueuelen 1000 (Ethernet)
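# Before downing an interface it can help to confirm the bonding state; the
# standard Linux bonding driver exposes it under /proc/net/bonding/ (the bond
# name below is an assumption - list the directory to find the real one):
ls /proc/net/bonding/
egrep '(Slave Interface|MII Status|Permanent HW addr)' /proc/net/bonding/bond0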
[root@boot-172-16-0-2 log]# ifconfig eth1 down
[root@boot-172-16-0-2 log]# ipmi-chassis --session-timeout 1999 --retransmission-timeout 1000 -u admin -p admin -D LAN_2_0 -h 172.16.1.5 --get-status
System Power : off
Power overload : false
Interlock : inactive
Power fault : false
Power control fault : false
Power restore policy : Always off
Last Power Event : power on via ipmi command
Chassis intrusion : inactive
Front panel lockout : inactive
Drive Fault : false
Cooling/fan fault : false
Chassis Identify state : off
Power off button : enabled
Reset button : enabled
Diagnostic Interrupt button : enabled
Standby button : enabled
Power off button disable : unallowed
Reset button disable : unallowed
Diagnostic interrupt button disable : unallowed
Standby button disable : unallowed
- Monitoring the install from the boot node
tail -f /var/log/cmessages | egrep -i '(error|trace|critical|fail)'
# Good command to check that IPMI gets a DHCP lease and then passes the status check:
tail -f /var/log/cmessages | egrep -i '(dhcp|ipmi)'
- Working cloud.conf for Piston 3.5 (please also refer to the switch configuration: http://wiki.bostonlabs.co.uk/w/index.php/Supermicro:_Setting_up_VLANs_and_Routing_within_an_SSE-X24_Switch )
- NOTE: The default password is still the default hash: $6$2EFLpDNp$Example.reAyhjN90s.qORBBABvA0CExsiVcrKgZwz5uOwlLW7rRrCZJXjA5dQfHlA7L11c2n37nhcRav0aaa1
- Log in for the first time with admin / the hash above
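To replace the default hash with one of your own, a SHA-512 crypt ($6$) hash can be generated on any Linux box; the commands below are generic tools, not Piston-specific, and their availability depends on the distribution (openssl passwd -6 needs OpenSSL 1.1.1 or later, mkpasswd comes with the whois package):
# generate a new $6$ (SHA-512 crypt) hash to paste into cloud.conf
openssl passwd -6        # prompts for the password
mkpasswd -m sha-512      # alternative tool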
# removed all the comment lines:
[root@boot-172-16-0-2 usb1]# grep "^[^#;]" cloud.conf
[role profile BootNodeAdmin]
management api enabled = false
boot node enabled = true
[role profile ClusterAdmin]
management api enabled = true
[user profile admin]
role = BootNodeAdmin, ClusterAdmin
secret = $6$Mjc/QBXGf2Y$/l9f2jVxbeKkkk5KiyPxMD4k0MQggVLhZvjLI9NWD1CO2Fwzs1dyDsyKJ7RewfSG9nBipLMO0ySq7IlTvC5C2.
[user profile piston-admin]
role = ClusterAdmin
secret = $6$rounds=60000$Example.Jkvnr3vC$wRiggCNQhj/qthYCLqFTFPOs2eil.0DsAe8qGw.UyQEejk9u6qk/hhWdwrYFIdArbmY4RGxVw7
[network]
host_net=172.16.0.0/24
host_bootnode_ip=172.16.0.2
management_net=172.16.1.0/24
services_net=172.16.2.0/26
services_vlan=2
cloud_net=172.16.3.0/24
cloud_vlan=3
public_net=172.16.4.0/24
public_vlan=4
ntp_servers=pool.ntp.org
dns_servers=8.8.8.8,8.8.4.4
type=nova-network
[disk profile ceph]
count_min=1
size_min=100GB
ssd=always
partitions=ceph_journal,ceph_journal,ceph_monitor,identity,ceph_data
priority=1
[disk profile ephemeral]
count_min=1
size_min=500GB
ssd=never
partitions=identity,ceph_data,ephemeral:500GB
priority=2
[auth]
type=local
[local_auth]
admin_username=admin
admin_password=$6$2EFLpDNp$Example.reAyhjN90s.qORBBABvA0CExsiVcrKgZwz5uOwlLW7rRrCZJXjA5dQfHlA7L11c2n37nhcRav0aaa1
[ldap_auth]
url=ldap://ldap.example.com
user=CN=ldapadmin,CN=Users,DC=example,DC=com
password=BadPassword
suffix=DC=example,DC=com
tenant_tree_dn=OU=Piston,DC=example,DC=com
tenant_objectclass=organizationalUnit
tenant_id_attribute=ou
tenant_name_attribute=displayName
user_tree_dn=CN=Users,DC=example,DC=com
user_objectclass=person
user_id_attribute=cn
user_name_attribute=cn
user_attribute_ignore=password,tenant_id,tenants
user_enabled_attribute=userAccountControl
user_enabled_default=512
user_enabled_mask=2
role_tree_dn=OU=Piston,DC=example,DC=com
role_objectclass=group
role_id_attribute=cn
role_name_attribute=displayName
role_member_attribute=member
[snmp]
enabled=no
community=piston
[dashboard]
[servers]
server_count=5
image_cache_size=204800
ipmi_user=admin
ipmi_pass=admin
- Jon's cloud.conf file:
[role profile BootNodeAdmin]
management api enabled = false
boot node enabled = true
[role profile ClusterAdmin]
management api enabled = true
[user profile admin]
role = BootNodeAdmin, ClusterAdmin
secret = $6$twnChnKbtTmi8$oCVMXrbVivv7U.Ev4bXe3VvWg8o1lNQwdfQxFbZkE/cqJeB7dtGwWjrnrK5VFlkOgHvVq4gdZqUYEgRmrIVga.
[user profile piston-admin]
role = ClusterAdmin
secret = $6$twnChnKbtTmi8$oCVMXrbVivv7U.Ev4bXe3VvWg8o1lNQwdfQxFbZkE/cqJeB7dtGwWjrnrK5VFlkOgHvVq4gdZqUYEgRmrIVga.
[network]
host_net=172.16.0.0/24
host_bootnode_ip=172.16.0.2
management_net=172.16.1.0/24
services_net=172.16.2.0/26
services_vlan=2
cloud_net=172.16.3.0/24
cloud_vlan=3
public_net=172.16.4.0/24
public_vlan=4
ntp_servers=pool.ntp.org
dns_servers=8.8.8.8,8.8.4.4
type=nova-network
[disk profile ceph]
count_min=1
size_min=100GB
ssd=always
partitions=ceph_journal,ceph_journal,ceph_monitor,identity,ceph_data
priority=1
[disk profile ephemeral]
count_min=1
size_min=500GB
ssd=never
partitions=identity,ceph_data,ephemeral:500GB
priority=2
[auth]
type=local
[local_auth]
admin_username=admin
admin_password=$6$twnChnKbtTmi8$oCVMXrbVivv7U.Ev4bXe3VvWg8o1lNQwdfQxFbZkE/cqJeB7dtGwWjrnrK5VFlkOgHvVq4gdZqUYEgRmrIVga.
[ldap_auth]
url=ldap://ldap.example.com
user=CN=ldapadmin,CN=Users,DC=example,DC=com
password=BadPassword
suffix=DC=example,DC=com
tenant_tree_dn=OU=Piston,DC=example,DC=com
tenant_objectclass=organizationalUnit
tenant_id_attribute=ou
tenant_name_attribute=displayName
user_tree_dn=CN=Users,DC=example,DC=com
user_objectclass=person
user_id_attribute=cn
user_name_attribute=cn
user_attribute_ignore=password,tenant_id,tenants
user_enabled_attribute=userAccountControl
user_enabled_default=512
user_enabled_mask=2
role_tree_dn=OU=Piston,DC=example,DC=com
role_objectclass=group
role_id_attribute=cn
role_name_attribute=displayName
role_member_attribute=member
[snmp]
enabled=no
community=piston
[dashboard]
[servers]
server_count=3
image_cache_size=204800
ipmi_user=admin
ipmi_pass=admin