NICE:Landing page

DCV
DCV enhances the graphics capabilities of 3D applications on Linux and Microsoft Windows, for both OpenGL and DirectX, to display complex visual data across multiple, simultaneous distributed displays over low-bandwidth networks. NICE DCV is the remote 3D visualization technology that enables Technical Computing users to connect to OpenGL or DirectX applications running in a data center.
Using NICE DCV, you can work remotely on interactive 3D applications, fully accelerated by high-end GPUs on workstations, blades or servers. Whether you are accessing high-end OpenGL modeling applications or simple viewers, NICE DCV lets you connect quickly and securely from anywhere and experience high frame rates, even over low-bandwidth, standard Internet connections.
The product supports both Microsoft Windows and Linux systems, enabling collaborative capabilities in heterogeneous environments. It also integrates tightly with NICE EnginFrame, bringing its 2D/3D capabilities to the Web, including the ability to share a session with other users for collaboration or support.
Features
- Collaboration
Support for multiple, collaborative end stations. The set of end stations can be dynamic, with some connections being made and others dropped throughout the DCV session.
- H.264-based encoding
Greatly reduces bandwidth consumption
- Exploits the latest NVIDIA GRID SDK technologies
Improves performance and reduces system load. Uses the NVIDIA H.264 hardware encoder (on Kepler and GRID cards)
- Full Desktop Remotization
Uses the high-performance NICE DCV protocol for the remotization of the full desktop (not only for the 3D windows as in previous versions)
- Support for NVIDIA vGPU technology
Simplifies the deployment of Windows VMs with full application support
- High Quality Updates
Support for high quality updates when network and processor conditions allow
- Image Only Transport
Transmission of final rendered image, rather than geometry and scene information, providing insulation and protection of proprietary customer information
- User Selectable Compression Levels
Ability to specify the compression level used to send the final image over the wire to the endstation
- Pluggable Compression Capability
Pluggable compression/decompression (codec) framework that allows the image compression/decompression algorithms to be replaced
- Smart-card remotization
Seamlessly access the local smart card, using the standard PC/SC interface. Use smart cards for encrypting emails, signing documents and authenticating against remote systems
- Adaptive server-side resolution
Automatically adapt the server-side screen resolution to the size of the viewer window
- USB remotization (Preview)
Plug USB devices on the client side and use them on the remote desktop
Typical Deployment
DCV uses an application host to run OpenGL or DirectX graphics applications and transmits the output to one or more end stations that connect to the application host over the network.
The application host sends updates (in the form of pixel data) to each connected end station, and end stations send user events (such as mouse or keyboard actions) back to the host. Each end station is responsible for:
- displaying one or more application windows running on the host machine;
- sending user interaction with the application host for processing.
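This flow can be pictured with a short, purely illustrative Python sketch. It is not the NICE DCV protocol or API; the ApplicationHost and EndStation names and their methods are hypothetical and exist only to show the direction of pixel updates and user events.

# Toy in-process model of the deployment described above: one application
# host pushes rendered frames (pixel data) to the connected end stations,
# and end stations send user events back. Hypothetical names throughout.
from dataclasses import dataclass, field

@dataclass
class EndStation:
    name: str
    frames: list = field(default_factory=list)

    def display(self, pixels: bytes) -> None:
        # A real end station would decode and draw the frame here.
        self.frames.append(pixels)

    def user_event(self, kind: str, payload: dict) -> dict:
        # Mouse/keyboard interaction that must be processed on the host.
        return {"station": self.name, "kind": kind, **payload}

class ApplicationHost:
    """Runs the 3D application and fans out rendered frames."""

    def __init__(self) -> None:
        self.stations: list[EndStation] = []

    def connect(self, station: EndStation) -> None:
        self.stations.append(station)   # end stations can join at any time...

    def disconnect(self, station: EndStation) -> None:
        self.stations.remove(station)   # ...and drop out again mid-session

    def broadcast_frame(self, pixels: bytes) -> None:
        # Only the final rendered image is sent, never geometry or scene data.
        for station in self.stations:
            station.display(pixels)

    def handle_event(self, event: dict) -> None:
        # User input is applied on the host, which then renders the next frame.
        print(f"host applying {event['kind']} event from {event['station']}")

if __name__ == "__main__":
    host = ApplicationHost()
    alice, bob = EndStation("alice"), EndStation("bob")
    host.connect(alice)
    host.connect(bob)                   # collaborative session with two viewers
    host.broadcast_frame(b"<frame 1 pixel data>")
    host.handle_event(bob.user_event("mouse", {"x": 10, "y": 20}))
    host.broadcast_frame(b"<frame 2 pixel data>")
    print(len(alice.frames), "frames at alice,", len(bob.frames), "frames at bob")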
Modes of Operation
| Technology | Hypervisors | Application Compatibility | OS Support | Pros and Limitations |
|---|---|---|---|---|
| Bare Metal or GPU Pass-through | All | Maximum | Linux and Windows | Pros: Best performance. Limitations: One VM per GPU |
| NICE External Rendering Server | All | Limited | Windows | Pros: Best consolidation, GPU sharing. Limitations: |
| NVIDIA vGPU | XenServer 6.2 SP1 | Excellent | Windows | Pros: Good performance, GPU sharing. Limitations: Requires NVIDIA GRID cards and a specific hypervisor |
Installation
- Before You Begin
- Understanding the Piston Installation Process
- Networking Setup
- Disks and Disk-Management
- Users, Roles and Authentication
- DCV: Installation and Configuration
- Update Piston
- Advanced Installation Options
Operation Guides
Component Guides
Notes
- Notes taken as I installed the system
- Accessing the nodes via ssh
# access the nodes, do this from the boot node:
sudo su -
(password)
dev ssh
ssh <IP> # ip from the dashboard, access the private address of the nodes
- Reinstall a system without a full reboot
# on the boot node and as root
# edit the conf file
vi /mnt/flash/conf/pentos.conf.used
# then :
dev reinit
- Check status via CLI (if you can't access the web interface)
# on boot node
[root@boot-172-16-0-2 ~]# piston-dev.py cluster-info
{u'control': {u'state': u'initialize:wait-for-nodes'},
u'hosts': {u'172.16.1.3': {u'blessed': False,
u'context': {},
u'diskdata': None,
# or use the short version:
[root@boot-172-16-0-2 ~]# piston-dev.py cluster-info -s
{u'control': {u'state': u'optimal'},
u'hosts': {u'172.16.1.2': {u'host_ip': u'172.16.0.13',
u'progress': [],
u'status': u'ready'},
u'172.16.1.3': {u'host_ip': u'172.16.0.14',
u'progress': [],
u'status': u'ready'},
u'172.16.1.4': {u'host_ip': u'172.16.0.15',
u'progress': [],
u'status': u'ready'},
u'172.16.1.5': {u'host_ip': None,
u'progress': [],
u'status': u'stop'},
u'172.16.1.6': {u'host_ip': None,
u'progress': [],
u'status': u'booting'},
u'172.16.1.7': {u'host_ip': None,
u'progress': [],
u'status': u'booting'}}}
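The -s output above is a plain Python dict literal, so it can be summarised with a small helper. The script below (summarise_cluster.py) is a hypothetical convenience, not part of Piston; it only assumes the structure of the cluster-info -s output shown above. Example use: piston-dev.py cluster-info -s | python3 summarise_cluster.py

# Hypothetical helper: summarise `piston-dev.py cluster-info -s` output.
import ast
import sys

def summarise(raw: str) -> None:
    info = ast.literal_eval(raw)             # dict with 'control' and 'hosts'
    print("cluster state:", info["control"]["state"])
    for mgmt_ip, host in sorted(info["hosts"].items()):
        status = host.get("status", "unknown")
        host_ip = host.get("host_ip") or "-"
        print(f"{mgmt_ip:>12}  status={status:<8}  host_ip={host_ip}")
    not_ready = sorted(ip for ip, h in info["hosts"].items()
                       if h.get("status") != "ready")
    if not_ready:
        print("nodes not ready:", ", ".join(not_ready))

if __name__ == "__main__":
    summarise(sys.stdin.read())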
- Force reinstall
# create the file destroy-data on the USB root
# or on the boot node:
touch /mnt/usb1/destroy-data
- Problems with IPMI (timeouts; commands should complete in <2 seconds)
# this is the command Piston will use against the IPMI module - run it on the boot node to diagnose IPMI issues
[root@boot-172-16-0-2 log]# ipmi-chassis --session-timeout 1999 --retransmission-timeout 1000 -u admin -p admin -D LAN_2_0 -h 172.16.1.5 --get-status
ipmi_ctx_open_outofband_2_0: session timeout
# above is bad! below is what you want to see
[root@boot-172-16-0-2 log]# ipmi-chassis -u admin -p admin -D LAN_2_0 -h 172.16.1.5 --get-status
System Power : off
Power overload : false
Interlock : inactive
Power fault : false
Power control fault : false
Power restore policy : Always off
Last Power Event : power on via ipmi c
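To check all the nodes at once and spot slow BMCs, the same ipmi-chassis call can be wrapped and timed per node. This is a hypothetical helper; the node IPs, the admin/admin credentials and the 2-second expectation come straight from these notes and should be adjusted for your own cluster.

# Hypothetical wrapper: time the ipmi-chassis status call against each BMC.
import subprocess
import time

NODES = ["172.16.1.3", "172.16.1.4", "172.16.1.5", "172.16.1.6", "172.16.1.7"]
THRESHOLD = 2.0   # commands should complete in under 2 seconds

for node in NODES:
    cmd = ["ipmi-chassis", "-u", "admin", "-p", "admin",
           "-D", "LAN_2_0", "-h", node, "--get-status"]
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    flag = "  <-- SLOW, investigate" if elapsed > THRESHOLD else ""
    print(f"{node}: rc={result.returncode} in {elapsed:.2f}s{flag}")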
### If you are seeing IPMI timeouts, it's probably because eth0/eth1 are trying to bond - note the MACs are the same below (only one cable is plugged in in this instance)
eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:25:90:4b:7a:85 txqueuelen 1000 (Ethernet)
eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:25:90:4b:7a:85 txqueuelen 1000 (Ethernet)
[root@boot-172-16-0-2 log]# ifconfig eth1 down
[root@boot-172-16-0-2 log]# ipmi-chassis --session-timeout 1999 --retransmission-timeout 1000 -u admin -p admin -D LAN_2_0 -h 172.16.1.5 --get-status
System Power : off
Power overload : false
Interlock : inactive
Power fault : false
Power control fault : false
Power restore policy : Always off
Last Power Event : power on via ipmi command
Chassis intrusion : inactive
Front panel lockout : inactive
Drive Fault : false
Cooling/fan fault : false
Chassis Identify state : off
Power off button : enabled
Reset button : enabled
Diagnostic Interrupt button : enabled
Standby button : enabled
Power off button disable : unallowed
Reset button disable : unallowed
Diagnostic interrupt button disable : unallowed
Standby button disable : unallowed
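A quick way to spot the duplicate-MAC symptom described above, without eyeballing ifconfig, is to read the addresses from /sys/class/net. This is only an illustrative check, not a Piston tool.

# Illustrative check: flag interfaces sharing the same MAC (bonding symptom).
from collections import defaultdict
from pathlib import Path

macs = defaultdict(list)
for iface in sorted(Path("/sys/class/net").iterdir()):
    addr_file = iface / "address"
    if addr_file.exists():
        macs[addr_file.read_text().strip()].append(iface.name)

for mac, ifaces in macs.items():
    if len(ifaces) > 1 and mac != "00:00:00:00:00:00":
        print(f"duplicate MAC {mac} on {', '.join(ifaces)} - check bonding")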
- Monitoring the install from the boot node
tail -f /var/log/cmessages | egrep -i '(error|trace|critical|fail)'
# Good command to check that the IPMI modules get DHCP addresses and then pass the status check:
tail -f /var/log/cmessages | egrep -i '(dhcp|ipmi)'
- Working cloud.conf for Piston 3.5 (please also refer to the switch configuration: http://wiki.bostonlabs.co.uk/w/index.php/Supermicro:_Setting_up_VLANs_and_Routing_within_an_SSE-X24_Switch )
- NOTE: The default password is still the default hash: $6$2EFLpDNp$Example.reAyhjN90s.qORBBABvA0CExsiVcrKgZwz5uOwlLW7rRrCZJXjA5dQfHlA7L11c2n37nhcRav0aaa1
- Log in for the first time with admin and the hash above
# removed all the comment lines:
[root@boot-172-16-0-2 usb1]# grep "^[^#;]" cloud.conf
[role profile BootNodeAdmin]
management api enabled = false
boot node enabled = true
[role profile ClusterAdmin]
management api enabled = true
[user profile admin]
role = BootNodeAdmin, ClusterAdmin
secret = $6$Mjc/QBXGf2Y$/l9f2jVxbeKkkk5KiyPxMD4k0MQggVLhZvjLI9NWD1CO2Fwzs1dyDsyKJ7RewfSG9nBipLMO0ySq7IlTvC5C2.
[user profile piston-admin]
role = ClusterAdmin
secret = $6$rounds=60000$Example.Jkvnr3vC$wRiggCNQhj/qthYCLqFTFPOs2eil.0DsAe8qGw.UyQEejk9u6qk/hhWdwrYFIdArbmY4RGxVw7
[network]
host_net=172.16.0.0/24
host_bootnode_ip=172.16.0.2
management_net=172.16.1.0/24
services_net=172.16.2.0/26
services_vlan=2
cloud_net=172.16.3.0/24
cloud_vlan=3
public_net=172.16.4.0/24
public_vlan=4
ntp_servers=pool.ntp.org
dns_servers=8.8.8.8,8.8.4.4
type=nova-network
[disk profile ceph]
count_min=1
size_min=100GB
ssd=always
partitions=ceph_journal,ceph_journal,ceph_monitor,identity,ceph_data
priority=1
[disk profile ephemeral]
count_min=1
size_min=500GB
ssd=never
partitions=identity,ceph_data,ephemeral:500GB
priority=2
[auth]
type=local
[local_auth]
admin_username=admin
admin_password=$6$2EFLpDNp$Example.reAyhjN90s.qORBBABvA0CExsiVcrKgZwz5uOwlLW7rRrCZJXjA5dQfHlA7L11c2n37nhcRav0aaa1
[ldap_auth]
url=ldap://ldap.example.com
user=CN=ldapadmin,CN=Users,DC=example,DC=com
password=BadPassword
suffix=DC=example,DC=com
tenant_tree_dn=OU=Piston,DC=example,DC=com
tenant_objectclass=organizationalUnit
tenant_id_attribute=ou
tenant_name_attribute=displayName
user_tree_dn=CN=Users,DC=example,DC=com
user_objectclass=person
user_id_attribute=cn
user_name_attribute=cn
user_attribute_ignore=password,tenant_id,tenants
user_enabled_attribute=userAccountControl
user_enabled_default=512
user_enabled_mask=2
role_tree_dn=OU=Piston,DC=example,DC=com
role_objectclass=group
role_id_attribute=cn
role_name_attribute=displayName
role_member_attribute=member
[snmp]
enabled=no
community=piston
[dashboard]
[servers]
server_count=5
image_cache_size=204800
ipmi_user=admin
ipmi_pass=admin
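Before booting a cluster from a cloud.conf like the one above, it can be worth sanity-checking the [network] section. The sketch below (check_network.py) is a hypothetical helper that relies only on the keys shown in the sample configuration; it is not a Piston tool.

# Hypothetical sanity check for the [network] section of cloud.conf.
import configparser
import ipaddress
import sys

cfg = configparser.ConfigParser(interpolation=None)
cfg.read(sys.argv[1] if len(sys.argv) > 1 else "cloud.conf")
net = cfg["network"]

# The boot node address should live inside the host network.
host_net = ipaddress.ip_network(net["host_net"])
boot_ip = ipaddress.ip_address(net["host_bootnode_ip"])
if boot_ip not in host_net:
    print(f"WARNING: host_bootnode_ip {boot_ip} is not inside host_net {host_net}")

# None of the configured subnets should overlap one another.
subnets = {key: ipaddress.ip_network(net[key])
           for key in ("host_net", "management_net", "services_net",
                       "cloud_net", "public_net")}
names = list(subnets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if subnets[a].overlaps(subnets[b]):
            print(f"WARNING: {a} ({subnets[a]}) overlaps {b} ({subnets[b]})")
print("checked:", ", ".join(f"{k}={v}" for k, v in subnets.items()))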
Jon's cloud.conf file:
[role profile BootNodeAdmin]
management api enabled = false
boot node enabled = true
[role profile ClusterAdmin]
management api enabled = true
[user profile admin]
role = BootNodeAdmin, ClusterAdmin
secret = $6$twnChnKbtTmi8$oCVMXrbVivv7U.Ev4bXe3VvWg8o1lNQwdfQxFbZkE/cqJeB7dtGwWjrnrK5VFlkOgHvVq4gdZqUYEgRmrIVga.
[user profile piston-admin]
role = ClusterAdmin
secret = $6$twnChnKbtTmi8$oCVMXrbVivv7U.Ev4bXe3VvWg8o1lNQwdfQxFbZkE/cqJeB7dtGwWjrnrK5VFlkOgHvVq4gdZqUYEgRmrIVga.
[network]
host_net=172.16.0.0/24
host_bootnode_ip=172.16.0.2
management_net=172.16.1.0/24
services_net=172.16.2.0/26
services_vlan=2
cloud_net=172.16.3.0/24
cloud_vlan=3
public_net=172.16.4.0/24
public_vlan=4
ntp_servers=pool.ntp.org
dns_servers=8.8.8.8,8.8.4.4
type=nova-network
[disk profile ceph]
count_min=1
size_min=100GB
ssd=always
partitions=ceph_journal,ceph_journal,ceph_monitor,identity,ceph_data
priority=1
[disk profile ephemeral]
count_min=1
size_min=500GB
ssd=never
partitions=identity,ceph_data,ephemeral:500GB
priority=2
[auth]
type=local
[local_auth]
admin_username=admin
admin_password=$6$twnChnKbtTmi8$oCVMXrbVivv7U.Ev4bXe3VvWg8o1lNQwdfQxFbZkE/cqJeB7dtGwWjrnrK5VFlkOgHvVq4gdZqUYEgRmrIVga.
[ldap_auth]
url=ldap://ldap.example.com
user=CN=ldapadmin,CN=Users,DC=example,DC=com
password=BadPassword
suffix=DC=example,DC=com
tenant_tree_dn=OU=Piston,DC=example,DC=com
tenant_objectclass=organizationalUnit
tenant_id_attribute=ou
tenant_name_attribute=displayName
user_tree_dn=CN=Users,DC=example,DC=com
user_objectclass=person
user_id_attribute=cn
user_name_attribute=cn
user_attribute_ignore=password,tenant_id,tenants
user_enabled_attribute=userAccountControl
user_enabled_default=512
user_enabled_mask=2
role_tree_dn=OU=Piston,DC=example,DC=com
role_objectclass=group
role_id_attribute=cn
role_name_attribute=displayName
role_member_attribute=member
[snmp]
enabled=no
community=piston
[dashboard]
[servers]
server_count=3
image_cache_size=204800
ipmi_user=admin
ipmi_pass=admin