Install and Manage Node Versions with NVM

Node Version Manager (nvm) makes it easy to install and manage multiple active Node.js versions.

Install or update nvm

First, make sure your system has a C++ compiler. On OS X, Xcode will work. Then install or update nvm with the following command:

# The script clones the nvm repository to ~/.nvm and adds the source line to your profile (~/.bash_profile, ~/.zshrc or ~/.profile).
$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash
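
Once nvm is installed (open a new shell or re-source your profile first), you can install and switch between Node.js versions. A minimal sketch; the version numbers below are just examples:

$ nvm install 4.4            # install a specific Node.js version
$ nvm use 4.4                # switch the current shell to that version
$ nvm alias default 4.4      # make it the default for new shells
$ nvm ls                     # list installed versions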

Getting Started with Vim by Vimtutor

Vim: The God of Editors

This is a simple Vim tutorial taken from Vim's built-in documents. You can get the whole tutorial by typing vimtutor in a shell, or vimtutor -g for the GUI version. It is intended to give a brief overview of the Vim editor, just enough to let you use it fairly easily.

Lesson 1: Text Editing Commands

1. The cursor is moved using either the arrow keys or the hjkl keys:
   h (left)   j (down)   k (up)   l (right)
2. To start Vim from the shell prompt type: vim FILENAME <ENTER>
3. To exit Vim type: <ESC> :q! <ENTER> to trash all changes,
   OR type: <ESC> :wq <ENTER> to save the changes,
   OR type: <ESC> ZZ to save the changes.
4. To delete the character at the cursor type: x
5. To insert or append text type:
   i   type inserted text   <ESC>   (insert before the cursor)
   A   type appended text   <ESC>   (append after the line)

Hacking PySpark inside Jupyter Notebook

Python is a wonderful programming language for data analytics. Normally, I prefer to write Python code inside Jupyter Notebook (previously known as IPython Notebook), because it lets us create and share documents that contain live code, equations, visualizations, and explanatory text. Apache Spark is a fast and general engine for large-scale data processing, and PySpark is the Python API for Spark. So if you are interested in data science, Jupyter is a good starting point for writing PySpark code:

IPYTHON_OPTS="notebook" pyspark --master spark://localhost:7077 --executor-memory 7g
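
Note that IPYTHON_OPTS was removed in Spark 2.0; on a newer Spark the equivalent (a sketch, assuming jupyter is on your PATH) is:

$ PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS=notebook pyspark --master spark://localhost:7077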


NPM Playbook

npm (Node Package Manager) is the package management tool for Node.js.
Node.js is an open-source JavaScript runtime built on Chrome's V8 JavaScript engine. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Note that Node.js is a server-side runtime environment rather than a language.
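
As a quick illustration of that non-blocking model, the callback below is deferred to the event loop, so the synchronous line prints first (a minimal sketch, runnable with any recent node):

$ node -e "setTimeout(() => console.log('callback runs later'), 0); console.log('synchronous code runs first')"
synchronous code runs first
callback runs later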

Initialize a project

First, create a package.json with npm init:

$ npm init # create package.json
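
From there, a few everyday npm commands (a sketch; express and mocha are just example packages):

$ npm install express --save     # install and record in dependencies
$ npm install mocha --save-dev   # install and record in devDependencies
$ npm ls --depth=0               # list top-level installed packages
$ npm uninstall express --save   # remove and update package.json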

Spark Source Code 01: Submit and Run Jobs

Standalone mode

$ cd {SPARK_HOME}/libexec/sbin/

Start the master (web UI on port 8080):

The entry class is org.apache.spark.deploy.master.Master; its onStart() method brings up the web UI and the REST server.

# spark command: java -Xms1g -Xmx1g org.apache.spark.deploy.master.Master
# --ip localhost --port 7077 --webui-port 8080
$ ./start-master.sh
Output Logs:
16/01/10 20:45:23 INFO Master: Registered signal handlers for [TERM, HUP, INT]
16/01/10 20:45:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/10 20:45:24 INFO SecurityManager: Changing view acls to: tony
16/01/10 20:45:24 INFO SecurityManager: Changing modify acls to: tony
16/01/10 20:45:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tony); users with modify permissions: Set(tony)
16/01/10 20:45:24 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
16/01/10 20:45:24 INFO Master: Starting Spark master at spark://localhost:7077
16/01/10 20:45:24 INFO Master: Running Spark version 1.6.0
16/01/10 20:45:24 INFO Utils: Successfully started service 'MasterUI' on port 8080.
16/01/10 20:45:24 INFO MasterWebUI: Started MasterWebUI at http://192.168.0.112:8080
16/01/10 20:45:24 INFO Utils: Successfully started service on port 6066.
16/01/10 20:45:24 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
16/01/10 20:45:24 INFO Master: I have been elected leader! New state: ALIVE

Start a worker (web UI on port 8081):

In org.apache.spark.deploy.worker.Worker, onStart() calls registerWithMaster().

# spark command: java -Xms1g -Xmx1g org.apache.spark.deploy.worker.Worker
# --webui-port 8081 spark://localhost:7077
$ ./start-slave.sh spark://localhost:7077
Output Logs:
16/01/10 20:50:45 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
16/01/10 20:50:45 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/10 20:50:45 INFO SecurityManager: Changing view acls to: tony
16/01/10 20:50:45 INFO SecurityManager: Changing modify acls to: tony
16/01/10 20:50:45 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tony); users with modify permissions: Set(tony)
16/01/10 20:50:46 INFO Utils: Successfully started service 'sparkWorker' on port 49576.
16/01/10 20:50:46 INFO Worker: Starting Spark worker 192.168.0.112:49576 with 4 cores, 7.0 GB RAM
16/01/10 20:50:46 INFO Worker: Running Spark version 1.6.0
16/01/10 20:50:46 INFO Worker: Spark home: /usr/local/Cellar/apache-spark/1.6.0/libexec
16/01/10 20:50:46 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
16/01/10 20:50:46 INFO WorkerWebUI: Started WorkerWebUI at http://192.168.0.112:8081
16/01/10 20:50:46 INFO Worker: Connecting to master localhost:7077...
16/01/10 20:50:46 INFO Worker: Successfully registered with master spark://localhost:7077
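
Alternatively, Spark ships sbin/start-all.sh, which starts a master plus the workers listed in conf/slaves, and sbin/stop-all.sh to shut everything down:

$ ./start-all.sh   # start master and workers in one shot
$ ./stop-all.sh    # stop them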

Start spark-shell against the cluster (driver web UI at http://localhost:4040):

$ MASTER=spark://localhost:7077 spark-shell


scala> sc.textFile("README.md").filter(_.contains("Spark")).count


A job like the one above flows through the scheduler pipeline (reconstructed from the standard Spark scheduling diagram):

RDD Objects (built by calls such as sc.textFile(...))
  -> DAGScheduler: splits the DAG into stages and handles errors between stages
  ==TaskSet==> TaskScheduler (org.apache.spark.scheduler.TaskScheduler): launches tasks and handles errors inside a stage
  -> Workers execute the tasks
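
Besides spark-shell, you can submit a packaged application to the same master with spark-submit. A sketch using the bundled SparkPi example (the examples jar path varies by Spark build):

$ spark-submit --master spark://localhost:7077 \
    --class org.apache.spark.examples.SparkPi \
    $SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar 100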