Have a technical how-to question? Not sure how to resolve that error code? Post your question on Stack Overflow and get answers from our extended Bluemix community!
Have a non-technical question about your Bluemix account or the Bluemix Service offerings? Want to know how to get started? Ask us on dW Answers!
- Log in to Bluemix
- Open the account tray by clicking the avatar in the top right
- Click “Get Help”
- Click “Get In Touch”
You can also go to the IBM Client Success Portal and enter your Bluemix ID and password to create or view a Support ticket.
The following list specifies the minimum required browser software for IBM Bluemix:
- Chrome, latest version for your operating system
- Firefox, latest version for your operating system and ESR 38
- Internet Explorer, version 11
- Safari, latest version for Mac
Cloud Foundry command line interface, Version 6.5.1 or later
Known issues and other getting-started problems are addressed in these release notes.
Bluemix known issues
Error when uploading a new certificate. When you replace a certificate, an error during the upload of the new certificate might leave you without a custom certificate covering your domain. While you determine the cause of the error, you might want to upload the original certificate again.
BigInsights service known issues
Text Analytics “Run on Cluster” feature not working. If the “Run on Cluster” feature in your web console has not been used in more than 24 hours, it fails when a Hadoop MapReduce job is submitted to your cluster’s YARN service. This occurs because of an outstanding issue with obtaining NameNode delegation tokens for the job that the web tool submits on behalf of its user. To recover, restart the Text Analytics service.
BigSheets not working. If the BigSheets service has not been used in more than 24 hours, subsequent BigInsights BigSheets operations no longer work. This occurs because of the same outstanding issue with obtaining NameNode delegation tokens for the job that BigSheets submits on behalf of its user. To recover, restart the BigSheets service.
Problems with links in the Job History or ResourceManager UI or logs. You might encounter various problems with Job History or Resource Manager links in the Ambari web console because of current limitations with Knox 0.6.0. Errors include 404, “job not found”, and “user dr.who is not authorized.” For example:
- MapReduce2 -> Quick Links -> JobHistory UI -> Job ID -> Node link -> List Of Containers returns an HTTP 404 (Not Found) error
- MapReduce2 -> Quick Links -> JobHistory UI -> Job ID -> Node link -> List Of Applications -> ID -> History -> many links (for example “Logs”, “Map”, “Reduce”, and so on) return “job not found” error
- MapReduce2 -> Quick Links -> JobHistory UI -> Job ID -> Node link -> NodeManager -> Node Information -> Node HTTP Address is not correct when Knox is enabled
- MapReduce2 -> Quick Links -> JobHistory UI -> Tools -> Configuration Local logs returns “user dr.who is unauthorized to access this page” error
- MapReduce2/YARN -> Quick Links -> JobHistory UI/ResourceManager UI -> Job ID -> Logs returns “user [dr.who] is not authorized to view the logs for container” error
- MapReduce2/YARN -> Quick Links -> JobHistory logs/ResourceManager logs returns “user dr.who is unauthorized to access this page” error
- HDFS -> Quick Links -> NameNode logs returns “user dr.who is unauthorized to access this page” error
The biadmin user currently does not belong to the SYSMAINT group. Without this authority, IBM Data Server Manager (DSM) for Big SQL cannot monitor utilities (for example, LOAD).
User name ‘Dr. Who’. You can see this user name, which represents an anonymous user, displayed in the YARN interface if your browser is not configured to use Kerberos authentication. Currently, BigInsights on Cloud does not support Kerberos-enabled browsers.
Kerberos authentication is enabled by default.
- Kerberos authentication is enabled by default in this release, and disabling Kerberos authentication is not supported.
- When Kerberos authentication is enabled, you cannot create tables or access tables in the catalog browser from the BigSheets interface.
- When Kerberos authentication is enabled, you cannot run Apache Spark jobs from the beeline shell. Use the Spark shell instead.
Flume is not supported in this release. Flume is installed with the BigInsights service but is not supported because of port configuration issues.
Spark limitations. For information about current Spark limitations, see the Spark service documentation.
Kafka is not supported in this release. Although Kafka is installed, it is not supported, and Kafka Twitter jobs fail with “java.security.cert.CertPathValidatorException”.
Update Knox topology files to access Quick Links and to run workloads after a failover. When there is a failover for either the NameNode or the Resource Manager, Quick Links in the Apache Ambari user interface do not work as designed, and workloads fail because the NameNode or the Resource Manager is unreachable.
Action: Modify the Knox topology file to point to the currently active NameNode or Resource Manager:
- The location of the Knox topology file for the NameNode on the Ambari server node is /usr/iop/current/knox-server/conf/topologies/default.xml. Search for the property “role” with values HDFSUI, NAMENODE, WEBHDFS and modify the corresponding “url” value to include the URL for the currently active NameNode host.
- The location of the Knox topology file for the Resource Manager on the Ambari server node is /usr/iop/current/knox-server/conf/topologies/yarnui.xml. Search for the property “role” with value YARNUI and modify the corresponding “url” value to include the URL for the currently active Resource Manager.
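For example, after a NameNode failover the HDFS-related entries in default.xml might look like the following sketch (the host name is a placeholder, and the ports shown are common Hadoop defaults that might differ in your cluster):

```xml
<!-- Illustrative Knox topology entries in default.xml; replace the
     host name and verify the ports against your cluster configuration. -->
<service>
  <role>HDFSUI</role>
  <url>http://active_namenode_host_name:50070</url>
</service>
<service>
  <role>NAMENODE</role>
  <url>hdfs://active_namenode_host_name:8020</url>
</service>
<service>
  <role>WEBHDFS</role>
  <url>http://active_namenode_host_name:50070/webhdfs</url>
</service>
```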
Hive Ambari views are not supported.
The default URLs for HDFS and YARN Quick Links in the Apache Ambari user interface do not work. To access these Quick Links, manually insert the following URLs into your browser:
- To view “NameNode UI”, use: https://ambari_server_IP_address:8443/gateway/default/hdfs/dfshealth.html#tab-overview
- To view “NameNode logs”, use: https://ambari_server_IP_address:8443/gateway/default/hdfs/logs
- To view “NameNode JMX”, use: https://ambari_server_IP_address:8443/gateway/default/hdfs/jmx
- To view “Thread Stacks”, use: https://ambari_server_IP_address:8443/gateway/default/hdfs/stacks
Alternatively, you can use the host name in place of the IP address. In this case, to ensure that the host name used in the links can be resolved, add an entry to the /etc/hosts file of the machine where your browser runs. For example, on Windows operating systems, add the following entry (replacing the variable names with actual values) to the C:\Windows\System32\drivers\etc\hosts file: ambari_server_IP_address ambari_server_host_name
- To view “ResourceManager UI”, use: https://ambari_server_host_name:8443/gateway/yarnui/yarn
- To view “ResourceManager logs”, use: https://ambari_server_host_name:8443/gateway/yarnui/yarn/logs
- To view “ResourceManager JMX”, use: https://ambari_server_host_name:8443/gateway/yarnui/yarn/jmx
- To view “Thread Stacks”, use: https://ambari_server_host_name:8443/gateway/yarnui/yarn/stacks
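As a convenience, the Quick Link URLs above can be generated for your cluster with a short script (a sketch; the gateway port 8443 and the paths are taken from the URLs listed above, and the default host name is a placeholder you replace with your Ambari server host or IP address):

```shell
#!/bin/sh
# Print the Knox gateway Quick Link URLs for a given Ambari server host.
AMBARI_HOST="${1:-ambari_server_host_name}"   # replace with your host or IP

BASE="https://${AMBARI_HOST}:8443/gateway"

# HDFS links (default topology)
echo "NameNode UI:          ${BASE}/default/hdfs/dfshealth.html#tab-overview"
echo "NameNode logs:        ${BASE}/default/hdfs/logs"
echo "NameNode JMX:         ${BASE}/default/hdfs/jmx"
echo "HDFS thread stacks:   ${BASE}/default/hdfs/stacks"

# YARN links (yarnui topology)
echo "ResourceManager UI:   ${BASE}/yarnui/yarn"
echo "ResourceManager logs: ${BASE}/yarnui/yarn/logs"
echo "ResourceManager JMX:  ${BASE}/yarnui/yarn/jmx"
echo "YARN thread stacks:   ${BASE}/yarnui/yarn/stacks"
```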
When Resource Manager HA is enabled, there are two Resource Managers defined in the cluster. The Ambari web interface does not show which Resource Manager is active and which one is the standby. To retrieve the host name of the active Resource Manager, navigate to the YARN configuration page in the Ambari web interface and look for the host name that is associated with “yarn.resourcemanager.hostname.rm1”. If a YARN URL link fails with the error “This is standby RM”, update the /usr/iop/current/knox-server/conf/topologies/yarnui.xml file to point to the correct Resource Manager host name under the YARNUI role. For example:
<service>
  <role>YARNUI</role>
  <url>http://resource_manager_host_name:8088</url>
</service>
The following InfoSphere BigInsights sample applications do not work in BigInsights on Cloud:
- Web REST Import application
- Data download application
Use the methods described in the BigInsights on Cloud documentation instead.
Cryptic errors when running the Import Export Object store application
While using the Import Export Object Store application, if you pass an invalid credentials file, you might receive a confusing error message, such as the following, depending on whether you are copying to or from Swift:
Main class [com.ibm.biginsights.app.objectstore.ObjectStoreImportExport], main() threw exception, null
In either case, the issue is because of the invalid credentials file. Check the documentation to ensure that the file is created correctly and in the correct format.
Issue when running the Import Export Object store application
If you use the Import Export Object Store application to upload from a directory that contains a single file, the application fails when you try to download the same directory. If you use the application from a directory with multiple files, the upload and download operations work correctly.
You cannot stop Big SQL by running either stop.sh bigsql or stop.sh bigsql --force.
You might get the following error messages:
* ssh "bionclouddev1-bluemix-ys0-71-master-1.bicloud.com" "bash -l -c \"db2stop force\""
** 11/19/2014 13:00:44 0 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 1 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 1 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 2 0 SQL1025N The database manager was not stopped because databases are still active.
** 11/19/2014 13:00:44 3 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 4 0 SQL1025N The database manager was not stopped because databases are still active.
** 11/19/2014 13:00:44 5 0 SQL1032N No start database manager command was issued.
** 11/19/2014 13:00:44 6 0 SQL1032N No start database manager command was issued.
** SQL6033W Stop command processing was attempted on "7" node(s). "0" node(s) were successfully stopped. "5" node(s) were already stopped. "2" node(s) could not be stopped.
Action: To resolve this problem, you can force all Big SQL nodes to stop, or stop only the node that is reported in the SQL1025N message.
To force all Big SQL nodes to stop and clean up all resources:
- Log in to the Big SQL head node. To locate this node, open the Cluster Status page in the BigInsights web console and select Big SQL from the left navigation. On the right, under Big SQL Head Node, two entries are shown: Big SQL and Big SQL Version 1. Use the Big SQL entry.
- Enter: su bigsql
- Enter: sqllib/bin/db2_kill
To force only the node reported on SQL1025:
- Log in to the node that you want to stop
- Enter: su bigsql
- Enter: sqllib/bin/db2nkill [node#]
- Enter: sqllib/bin/ipclean
In either case, restart Big SQL from the Big SQL console.
You might run into an issue when running some TPC-H benchmark queries
When running the TPC-H benchmark queries, you might encounter an intermittent issue when running queries 7, 8, and 9 on very large data sets. You might receive a timeout error. Complete documentation for the queries is available from the Transaction Processing Performance Council web site at tpc.org. Queries 7, 8, and 9 are described in detail on pages 39 through 44 of the TPC Benchmark H Standard Specification, Revision 2.17.0. These issues have been resolved in a subsequent InfoSphere BigInsights fix pack that is not yet available on the BigInsights service.
Signing out of the BigInsights service does not sign you out of Bluemix
You must log out of Bluemix separately to log out completely.