- Log in to Bluemix
- Open the account tray by clicking the avatar in the top right
- Click “Get Help”
- Click “Get In Touch”
You can also go to the IBM Client Success Portal and enter your Bluemix ID and password to create or view a support ticket.
The following list specifies the minimum required browser software for IBM Bluemix:
- Chrome, latest version for your operating system
- Firefox, latest version for your operating system and ESR 38
- Internet Explorer, version 11
- Safari, latest version for Mac
You also need the Cloud Foundry command line interface, Version 6.5.1 or later.
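A quick way to confirm that the installed cf CLI meets this minimum is sketched below. This is an assumption-laden example, not part of the official documentation: it assumes the `cf` binary is on your PATH and that your shell has `sort -V` (GNU version sort).

```shell
# Extract the semantic version from `cf --version` output (e.g. "cf version 6.5.1-...").
installed="$(cf --version 2>/dev/null | grep -o '[0-9]*\.[0-9]*\.[0-9]*' | head -1)"
required="6.5.1"
# sort -V orders version strings; if the required version sorts first (or equal),
# the installed version meets the minimum.
if [ "$(printf '%s\n' "$required" "$installed" | sort -V | head -1)" = "$required" ]; then
  echo "cf CLI $installed meets the $required minimum"
else
  echo "cf CLI version is missing or older than $required; please upgrade"
fi
```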
Known issues and other getting-started problems are addressed in these release notes.
Bluemix known issues
Error when uploading a new certificate. When you are replacing a certificate, if an error occurs during the upload of the new certificate, you might be left without a custom certificate covering your domain. While you are determining the cause of the error, you might want to upload the original certificate again.
Analytics for Hadoop service known issues
Flume is not supported in this release. If you try to install the Flume service from Apache Ambari, you are logged out of Ambari for as long as the Ambari server is running. If you then try to log in to Ambari, even from a new browser session, you are immediately logged out. This is a current issue with the Ambari server.
Action: Do not attempt to add the Flume service (or any other service) from Ambari.
If you are logged out of Ambari, click Restart from the Analytics for Apache Hadoop service page. The cluster, including the Ambari server, restarts and you can log in.
Do not remove any service from Ambari, because that might affect other services and you will not be able to replace the removed service.
Changing service configurations. Service configurations have been optimized for the environment. Changing these configurations might adversely affect service performance or even the functioning of your cluster.
Action: Make changes only if you are an advanced IOP user who fully understands the implications of changing service configurations; you do so at your own risk.
Restart, Administration, and Launch actions. The Analytics for Apache Hadoop dashboard in Bluemix includes a new Administration action and a Restart action below the Launch action in the pull-down menu.
- Use Launch to access the BigSheets, Text Analytics, or Big SQL feature.
- Use Administration to access the Ambari console.
- Use Restart to shut down and restart the cluster, but only if you encounter an error when you select Launch or when cluster performance is poor.
Selecting a Quick Link returns a “Server not found” error. If you click Quick Links and select any link in the list (for example, NameNode UI), the system attempts to load a page whose URL starts with https://mn01.services.dal.bluemix.net:8443/. However, because “mn01” is an internal host name, the URL cannot be resolved successfully.
Action: Follow these steps to associate the IP address of your cluster host name with mn01.services.dal.bluemix.net in your hosts file (/etc/hosts for Linux or C:\Windows\System32\drivers\etc\hosts for Windows):
- Determine your cluster host name. For example, if your Apache Ambari UI opens at https://bi-hadoop-prod-2134.services.dal.bluemix.net:8081, your cluster host name is bi-hadoop-prod-2134.services.dal.bluemix.net.
- To retrieve the IP address of your cluster host name, run the nslookup your_cluster_host_name command from your operating system’s command shell, and record the IP address that is returned.
- Add an entry to your hosts file that associates the IP address of your cluster host name with mn01.services.dal.bluemix.net.
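The steps above can be sketched as follows. The cluster name is the example from the text, and the IP address is a placeholder; HOSTS_FILE defaults to a scratch file so you can try the commands safely (point it at /etc/hosts, or the Windows path above, to make the real change).

```shell
# Steps 1-2: find the cluster IP address, e.g.
#   nslookup bi-hadoop-prod-2134.services.dal.bluemix.net
# Step 3: append the mapping. 198.51.100.23 is a placeholder IP; substitute
# the address that nslookup reported for your cluster.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"   # use /etc/hosts for the real change
ENTRY="198.51.100.23 mn01.services.dal.bluemix.net"
# Append only if an mn01 mapping is not already present:
grep -q "mn01.services.dal.bluemix.net" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
```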
BigInsights service known issues
User name ‘Dr. Who’. You can see this user name, which represents an anonymous user, displayed in the YARN interface if your browser is not configured to use Kerberos authentication. Currently, BigInsights on Cloud does not support Kerberos-enabled browsers.
Kerberos authentication is enabled by default.
- Kerberos authentication is enabled by default in this release, and disabling Kerberos authentication is not supported.
- When Kerberos authentication is enabled, you cannot create tables or access tables in the catalog browser from the BigSheets interface.
- When Kerberos authentication is enabled, you cannot run Apache Spark jobs from the beeline shell. Use the Spark shell instead.
Flume is not supported in this release. Flume is installed with the BigInsights service but is not supported because of port configuration issues.
Spark limitations. For information about current Spark limitations, see the Spark documentation.
Kafka is not supported in this release. Kafka is currently not supported even though it is installed, and Kafka twitter jobs fail with “java.security.cert.CertPathValidatorException”.
Update Knox topology files to access Quick Links and to run workloads after a failover. When there is a failover for either the NameNode or the Resource Manager, Quick Links in the Apache Ambari user interface do not work as designed, and workloads fail because the NameNode or the Resource Manager is unreachable.
Action: Modify the Knox topology file to point to the currently active NameNode or Resource Manager:
- The location of the Knox topology file for the NameNode on the Ambari server node is /usr/iop/current/knox-server/conf/topologies/default.xml. Search for the property “role” with values HDFSUI, NAMENODE, WEBHDFS and modify the corresponding “url” value to include the URL for the currently active NameNode host.
- The location of the Knox topology file for the Resource Manager on the Ambari server node is /usr/iop/current/knox-server/conf/topologies/yarnui.xml. Search for the property “role” with value YARNUI and modify the corresponding “url” value to include the URL for the currently active Resource Manager.
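As a sketch of the edit described above, an HDFSUI entry in default.xml might look like this after being pointed at the currently active NameNode. The host name is a placeholder, and 50070 is the usual NameNode UI port in Hadoop 2.x clusters; confirm both values against your own cluster configuration.

```xml
<service>
  <role>HDFSUI</role>
  <url>http://active_namenode_host:50070</url>
</service>
```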
Hive Ambari views are not supported.
The default URLs for HDFS and YARN Quick Links in the Apache Ambari user interface do not work. To access these Quick Links, manually insert the following URLs into your browser:
- To view “NameNode UI”, use: https://ambari_server_IP_address:8443/gateway/default/hdfs/dfshealth.html#tab-overview
- To view “NameNode logs”, use: https://ambari_server_IP_address:8443/gateway/default/hdfs/logs
- To view “NameNode JMX”, use: https://ambari_server_IP_address:8443/gateway/default/hdfs/jmx
- To view “Thread Stacks”, use: https://ambari_server_IP_address:8443/gateway/default/hdfs/stacks
Alternatively, you can use the host name in place of the IP address. In this case, to ensure that the host name used in the links can be resolved, add an entry to the hosts file of the machine where your browser runs (/etc/hosts on Linux, or C:\Windows\System32\drivers\etc\hosts on Windows). Replacing the variable names with actual values, the entry has the form:
ambari_server_IP_address ambari_server_host_name
- To view “ResourceManager UI”, use: https://ambari_server_host_name:8443/gateway/yarnui/yarn
- To view “ResourceManager logs”, use: https://ambari_server_host_name:8443/gateway/yarnui/yarn/logs
- To view “ResourceManager JMX”, use: https://ambari_server_host_name:8443/gateway/yarnui/yarn/jmx
- To view “Thread Stacks”, use: https://ambari_server_host_name:8443/gateway/yarnui/yarn/stacks
When Resource Manager HA is enabled, there are two Resource Managers defined in the cluster. The Ambari web interface does not show which Resource Manager is active and which one is the standby. To retrieve the host name of the active Resource Manager, navigate to the YARN configuration page in the Ambari web interface and look for the host name that is associated with “yarn.resourcemanager.hostname.rm1”. If a YARN URL link fails with the error “This is standby RM”, update the /usr/iop/current/knox-server/conf/topologies/yarnui.xml file to point to the correct Resource Manager host name under the YARNUI role. For example:
<service>
  <role>YARNUI</role>
  <url>http://resource_manager_host_name:8088</url>
</service>
The following InfoSphere BigInsights sample applications do not work in BigInsights on Cloud:
- Web REST Import application
- Data download application
Use the methods described in the BigInsights on Cloud documentation instead.
Cryptic errors when running the Import Export Object Store application
While using the Import Export Object Store application, if you pass an invalid credentials file, you might receive one of the following confusing error messages depending on whether you are trying to copy to or from SWIFT:
Main class [com.ibm.biginsights.app.objectstore.ObjectStoreImportExport], main() threw exception, null
In either case, the issue is caused by the invalid credentials file. Check the documentation to ensure that the file is created correctly and is in the correct format.
Issue when running the Import Export Object Store application
If you use the Import Export Object Store application to upload from a directory that contains a single file, the application fails when you try to download the same directory. If you use the application from a directory with multiple files, the upload and download operations work correctly.
You cannot stop Big SQL by running either stop.sh bigsql or stop.sh bigsql --force.
You might get the following error messages:
* ssh "bionclouddev1-bluemix-ys0-71-master-1.bicloud.com" "bash -l -c \"db2stop force\""
** 11/19/2014 13:00:44 0 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 1 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 1 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 2 0 SQL1025N The database manager was not stopped because databases are still active.
** 11/19/2014 13:00:44 3 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 4 0 SQL1025N The database manager was not stopped because databases are still active.
** 11/19/2014 13:00:44 5 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 6 0 SQL1032N No start database manager command was issued.
** SQL6033W Stop command processing was attempted on "7" node(s). "0" node(s) were successfully stopped. "5" node(s) were already stopped. "2" node(s) could not be stopped.
Action: To resolve this problem, force all Big SQL nodes to stop, or force only the node reported in the SQL1025N message to stop.
To force all Big SQL nodes to stop and clean up all resources:
- Log in to the Big SQL head node. To locate this node, open the Cluster Status page in the BigInsights web console and select Big SQL from the left navigation. On the right, under Big SQL Head Node, you will see two entries, Big SQL and Big SQL Version 1; use the Big SQL entry.
- Enter: su bigsql
- Enter: sqllib/bin/db2_kill
To force only the node reported in the SQL1025N message to stop:
- Log in to the node that you want to stop
- Enter: su bigsql
- Enter: sqllib/bin/db2nkill [node#]
- Enter: sqllib/bin/ipclean
In either case, restart Big SQL from the Big SQL console.
You might run into an issue when running some TPC-H benchmark queries
When running the TPC-H benchmark queries against very large data sets, you might encounter an intermittent timeout error with queries 7, 8, and 9. Complete documentation for the queries is available from the Transaction Processing Performance Council web site at tpc.org; queries 7, 8, and 9 are described in detail on pages 39 through 44 of the TPC Benchmark H Standard Specification, Revision 2.17.0. These issues are resolved in a subsequent InfoSphere BigInsights fix pack that is not yet available on the BigInsights service.
Signing out of the BigInsights service does not sign you out of Bluemix
You must log out of Bluemix separately to completely log out.