
# Vita

## Matthis Kruse

### Personal

• Birthplace: Eschwege
• Date of Birth: 23.08.1997
• E-Mail: s8makrus[at]stud.uni-saarland.de
• Jabber/XMPP: dasnacl[at]dasnacl.de
• Github: https://github.com/DasNaCl

### Education

| Start   | End     | School/University                                                    |
|---------|---------|----------------------------------------------------------------------|
| 08.2013 | 06.2016 | Abitur at Oberstufengymnasium des Werra-Meißner-Kreises (Mark: 2.8)  |
| 10.2016 | today   | Studying Computer Science at Saarland University                     |

### Jobs

| Start      | End        | At                          |
|------------|------------|-----------------------------|
| 02.11.2017 | 31.12.2017 | Tutor 'Mathematik-Vorkurs'  |

### Fields of Interest

• Compiler Design and Programming Languages in General
• Optimization
• Verification

# BFC - Part 6

Another thing we can do to improve our previously implemented optimization is to encode the operations in a more low-level manner. Instead of storing the `length` as a `std::size_t`, we now store it as a signed `int`. Then, instead of having to differentiate between `+` (`*ptr += 1`) and `-` (`*ptr -= 1`), we can just emit `*ptr += length`, since `length` can now be negative. To start things off, we implement the following stubs:

```
int statement::len() const
{
  return length;
}

void statement::inc_len(int l)
{
  length += l;
}
```

Our `length_encoder` can then be improved further like this:

```
void length_encoder::visit_seq(statement_seq& seq)
{
  auto last_it = seq.begin();
  for(auto cur_it = seq.begin() + 1; cur_it != seq.end(); )
  {
    auto last = (*last_it)->kind();
    auto cur  = (*cur_it)->kind();

    const bool last_is_seq = last == token::Unknown || last == token::L_Square;

    if(compatible(cur, last) && !last_is_seq)
    {
      (*last_it)->inc_len((*cur_it)->len());
      cur_it = seq.erase(cur_it);
    }
    else
    {
      if(last_is_seq)
      {
        // recursive call to optimize in loops
        (*last_it)->optimize(*this);
      }
      ++last_it;
      ++cur_it;
    }
  }
}
```
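The `compatible` helper used above is not defined in the post; presumably it checks that two tokens modify the same thing (the cell value or the pointer), so that their lengths may be summed. A minimal sketch under that assumption, reusing a copy of the `token_kind` enum from Part 1:

```cpp
#include <cassert>

// reduced copy of the token_kind enum from Part 1
enum class token_kind : char
{
  Plus     = '+',
  Minus    = '-',
  L_Shift  = '<',
  R_Shift  = '>',
  L_Square = '[',
  R_Square = ']',
  Dot      = '.',
  Comma    = ',',
  Unknown  = 0
};

// assumption: two tokens are compatible when they act on the same state,
// i.e. both change the current cell, or both move the pointer
bool compatible(token_kind a, token_kind b)
{
  auto cell = [](token_kind t) { return t == token_kind::Plus    || t == token_kind::Minus;   };
  auto ptr  = [](token_kind t) { return t == token_kind::L_Shift || t == token_kind::R_Shift; };
  return (cell(a) && cell(b)) || (ptr(a) && ptr(b));
}
```

With that, `+` merges with `-` (net length may even reach zero), and `<` merges with `>`, but cell and pointer operations never merge with each other.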

Some small parts are left to make this work. For example, you shouldn't forget to set the length to `-1` in case the token is a `token_kind::Minus`. Also, don't forget to overload `inc_len(int)` for `arithmetic`: there, the prefix `inc` stands for increasing the absolute value, so literally the "length". If you forget this, something like `-+` becomes a `Minus` AST node of length `0`, which should never happen! If we ever encounter that, our length encoder should have optimized it away. Anyway, members like `interpret` or `print_c` now look like this:

```
void arithmetic::interpret(bf_machine& m) noexcept
{
  switch(tok)
  {
  default:
    assert(false); // Hard logic error

  case token::Plus:
  case token::Minus:   { m.mem[m.position] += length; } break;

  case token::R_Shift:
  case token::L_Shift: { m.position += length; } break;
  }
}

void arithmetic::print_c(std::ostream& os)
{
  switch(tok)
  {
  default:
    assert(false); // Hard logic error

  case token::Plus:
  case token::Minus:   { os << "*ptr += " << length << ";"; } break;

  case token::R_Shift:
  case token::L_Shift: { os << "ptr += " << length << ";"; } break;
  }
}
```
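To see why the signed encoding lets `+` and `-` share one case, note that the net effect of any run of cell operations is just the sum of `+1`/`-1` contributions. A hypothetical helper (illustration only, not part of the compiler) makes that concrete:

```cpp
#include <cassert>
#include <string>

// illustration: the signed length a run of '+'/'-' tokens collapses into.
// "+++--" has net length 1, "-+" has net length 0 (and a zero-length node
// is exactly what the length encoder should erase entirely).
int net_length(const std::string& run)
{
  int len = 0;
  for(char c : run)
    len += (c == '+') ? 1 : -1; // only '+' and '-' expected here
  return len;
}
```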

# BFC - Part 5

A simple optimization we can already implement is an AST transformation. The idea is to do `*ptr += x` instead of executing `++*ptr` `x` times. This can be thought of as "length encoding". To start things off, we add the field `std::size_t length` to a statement and initialize it to `1`. Then we modify our `interpret` and possibly other members (for example, `print_c`) like so:

```
void arithmetic::interpret(bf_machine& m) noexcept
{
  switch(tok)
  {
  default:
    assert(false); // Hard logic error

  case token::Plus:    { m.mem[m.position] += length; } break;
  case token::Minus:   { m.mem[m.position] -= length; } break;

  case token::R_Shift: { m.position += length; } break;
  case token::L_Shift: { m.position -= length; } break;
  }
}

void arithmetic::print_c(std::ostream& os)
{
  switch(tok)
  {
  default:
    assert(false); // Hard logic error

  case token::Plus:    { os << "*ptr += " << length << ";"; } break;
  case token::Minus:   { os << "*ptr -= " << length << ";"; } break;

  case token::R_Shift: { os << "ptr += " << length << ";"; } break;
  case token::L_Shift: { os << "ptr -= " << length << ";"; } break;
  }
}
```

…and so on. Now the interesting part: how do we get there? How do we calculate the correct length? The standard "first idea" approach is sufficient: scan through all nodes linearly and check whether the same node occurs several times in a row. If so, delete one and increase the length of the other. Other approaches that crunch down the worst-case running time are possible too. You could even parallelize this! Anyway, let's get going. What we need is an optimize function for any `statement_seq`:

```
void statement_seq::optimize()
{
  auto last_it = statements.begin();
  for(auto cur_it = statements.begin() + 1; cur_it != statements.end(); )
  {
    auto last = (*last_it)->tok;
    auto cur  = (*cur_it)->tok;

    const bool last_is_seq = last == token::Unknown || last == token::L_Square;

    if(last == cur && !last_is_seq)
    {
      cur_it = statements.erase(cur_it);
      (*last_it)->length += 1U;
    }
    else
    {
      if(last_is_seq)
      {
        // recursive call to optimize in loops
        (*last_it)->optimize();
      }
      ++last_it;
      ++cur_it;
    }
  }
}
```
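The scan above is ordinary run-length encoding, just over AST nodes instead of characters. Sketched on a plain string (illustration only, none of the compiler's classes involved), the same idea looks like this:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// run-length encode a string: adjacent equal characters collapse into
// (character, count) pairs -- the string analogue of merging equal AST
// nodes and bumping `length` on the survivor
std::vector<std::pair<char, std::size_t>> run_lengths(const std::string& s)
{
  std::vector<std::pair<char, std::size_t>> out;
  for(char c : s)
  {
    if(!out.empty() && out.back().first == c)
      ++out.back().second;   // same "node" as before: grow its length
    else
      out.emplace_back(c, 1U); // new run starts with length 1
  }
  return out;
}
```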

### 2 + 5

Optimizing the Brainfuck code `++>+++++[<+>-]++++++++[<++++++>-]<.`, which evaluates the expression `2 + 5`, yields the following C output:

```
#include <stdio.h>
int main(int argc, char** argv) {
char array[30000] = {0}; char* ptr = array;
*ptr += 2;
ptr += 1;
*ptr += 5;
while(*ptr) {
ptr -= 1;
*ptr += 1;
ptr += 1;
*ptr -= 1;
}
*ptr += 8;
while(*ptr) {
ptr -= 1;
*ptr += 6;
ptr += 1;
*ptr -= 1;
}
ptr -= 1;
for(int i = 0; i < 1; ++i) putchar(*ptr);
}
```

As you can see, sequential calls get optimized away: `++*ptr;++*ptr;` became a simple `*ptr += 2;`. There is much more room for optimization; however, since we want to be able to turn specific optimization passes on or off later, we need a more object-oriented approach: our good ol' friend, the visitor pattern:

```
class statement_optimizer
{
public:
  virtual void visit_statement(statement& stat);
  virtual void visit_seq(statement_seq& seq);
  virtual void visit_arithmetic(arithmetic& ari);
  virtual void visit_serialization(serialization& seri);
  virtual void visit_loop(loop& lo);
};
```

Each statement now gets a (potentially overridden) method `void optimize(statement_optimizer& visitor)`, which is very simple to implement:

```
void statement::optimize(statement_optimizer& visitor)
{
  visitor.visit_statement(*this);
}

void statement_seq::optimize(statement_optimizer& visitor)
{
  statement::optimize(visitor);

  visitor.visit_seq(*this);
}

void arithmetic::optimize(statement_optimizer& visitor)
{
  statement::optimize(visitor);

  visitor.visit_arithmetic(*this);
}

void serialization::optimize(statement_optimizer& visitor)
{
  statement::optimize(visitor);

  visitor.visit_serialization(*this);
}

void loop::optimize(statement_optimizer& visitor)
{
  // statement::optimize is already called in statement_seq
  statement_seq::optimize(visitor);

  visitor.visit_loop(*this);
}
```

Our length encoding optimization pass can then be written as a visitor, like so:

```
class length_encoder : public statement_optimizer
{
public:
  void visit_seq(statement_seq& seq) override
  {
    auto last_it = seq.begin();
    for(auto cur_it = seq.begin() + 1; cur_it != seq.end(); )
    {
      auto last = (*last_it)->kind();
      auto cur  = (*cur_it)->kind();

      const bool last_is_seq = last == token::Unknown || last == token::L_Square;

      if(last == cur && !last_is_seq)
      {
        cur_it = seq.erase(cur_it);
        (*last_it)->inc_len();
      }
      else
      {
        if(last_is_seq)
        {
          // recursive call to optimize in loops
          (*last_it)->optimize(*this);
        }
        ++last_it;
        ++cur_it;
      }
    }
  }
};
```

And that's it! Now it's pretty easy to write optimization passes without interfering with our "main classes". This way, you could work on an external library which solely optimizes Brainfuck code, nicely hidden in another repository.
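To see how such a pass is wired up, here is the pattern boiled down to a self-contained miniature (all names here are hypothetical, not the post's classes): a base visitor whose default visit does nothing, and one concrete pass that overrides a single visit.

```cpp
#include <cassert>

struct node; // forward declaration so the visitor can mention it

// base optimizer: default visits are no-ops, so a concrete pass only
// overrides the node kinds it actually cares about
struct optimizer
{
  virtual ~optimizer() = default;
  virtual void visit_node(node&) {}
};

struct node
{
  int length = 1;
  // double dispatch: the node hands itself to the visitor
  void optimize(optimizer& o) { o.visit_node(*this); }
};

// one concrete pass, overriding exactly one visit
struct doubler : optimizer
{
  void visit_node(node& n) override { n.length *= 2; }
};
```

Turning a pass on or off then just means choosing which `optimizer` instance to hand to `optimize` — running the base class leaves the tree untouched.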

# BFC - Part 4

To have some fun, we can equip the compiler with an interpreter. It's very simple to do that now: just give every AST node an `interpret` member function and execute it from the root on. Brainfuck depends on some memory block (originally 30000 cells) and a position in it, so before we add all those `interpret()` members, we first create a simple struct containing everything we need to execute Brainfuck:

```
struct bf_machine
{
  /*implicit*/ bf_machine(std::size_t size = 30000U)
    : mem(), position(0U)
  {
    mem.resize(size);
  }

  std::vector<unsigned char> mem;
  std::size_t position;
};
```

We could encapsulate the memory and position better, add some bounds checking or "infinite memory". But for now, keep it simple, stupid. We just want to interpret some Brainfuck, man.

```
void statement_seq::interpret(bf_machine& m) noexcept
{
  for(auto& p : statements)
    p->interpret(m);
}

void arithmetic::interpret(bf_machine& m) noexcept
{
  switch(tok)
  {
  default:
    assert(false); // Hard logic error

  case token::Plus:
    {
      ++m.mem[m.position];
    } break;
  case token::Minus:
    {
      --m.mem[m.position];
    } break;

  case token::L_Shift:
    {
      --m.position;
    } break;
  case token::R_Shift:
    {
      ++m.position;
    } break;
  }
}

void serialization::interpret(bf_machine& m) noexcept
{
  if(tok == token::Comma)
    m.mem[m.position] = std::cin.get();
  else
    std::cout.put(m.mem[m.position]);
}

void loop::interpret(bf_machine& m) noexcept
{
  while(m.mem[m.position])
    statement_seq::interpret(m);
}
```
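As a self-contained illustration of the `bf_machine` semantics (loops and I/O omitted, and `run` is a hypothetical helper, not the post's AST interpreter), the four arithmetic symbols can be executed directly over a raw instruction string:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// same machine state as in the post
struct bf_machine
{
  explicit bf_machine(std::size_t size = 30000U) : mem(size, 0), position(0U) {}
  std::vector<unsigned char> mem;
  std::size_t position;
};

// execute only the four arithmetic symbols, mirroring arithmetic::interpret
void run(bf_machine& m, const std::string& code)
{
  for(char c : code)
  {
    switch(c)
    {
    case '+': ++m.mem[m.position]; break;
    case '-': --m.mem[m.position]; break;
    case '>': ++m.position; break;
    case '<': --m.position; break;
    }
  }
}
```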

Aaaaand it's done. It can now easily execute `helloworld.bf` or `hanoi.bf`. We can do quite similar things to implement a pretty printer or a language-to-language compiler, i.e. simply outputting C code. The latter is presented in the following code; anything else is left as a playground for the reader.

```
void statement_seq::print_c(std::ostream& os)
{
  for(auto& p : statements)
  {
    p->print_c(os);
    os << "\n";
  }
}

void arithmetic::print_c(std::ostream& os)
{
  switch(tok)
  {
  default:
    assert(false); // Hard logic error

  case token::Plus:    { os << "++*ptr;"; } break;
  case token::Minus:   { os << "--*ptr;"; } break;

  case token::L_Shift: { os << "--ptr;"; } break;
  case token::R_Shift: { os << "++ptr;"; } break;
  }
}

void serialization::print_c(std::ostream& os)
{
  os << (tok == token::Dot
           ? "putchar(*ptr);"
           : "*ptr = getchar();");
}

void loop::print_c(std::ostream& os)
{
  os << "while(*ptr) {\n";

  statement_seq::print_c(os);

  os << "}\n";
}
```

If we now output the C boilerplate properly, we already get a very simple language-to-language compiler with just the following code:

```
int main(int argc, char** argv)
{
  if(argc < 2)
  {
    std::cerr << "No input file." << std::endl;
    return 1;
  }
  lexer lex(argv[1]);
  parser par(lex);

  auto ptr = par.parse();

  std::cout << "#include <stdio.h>\n"
            << "int main(int argc, char** argv) {\n"
            << "char array[30000] = {0}; char* ptr = array;\n";

  ptr->print_c(std::cout);

  std::cout << "}\n";
}
```

# BFC - Part 3

Now that we have our (in the case of Brainfuck, very simple) AST, we want to generate it from the input. Remember that we've already written a tokenizer to transform the string of characters into a stream of tokens. The parser's job is now to construct our abstract syntax tree. (Note: pedantically, the parser's real job is to verify the input; AST construction is just a nice add-on.) In the case of Brainfuck the parser is very simple, since the only possible source of errors lies in loops. Anything else can be directly transformed into an AST node. So implementing that conversion (leaving loops out for the moment) is pretty straightforward:

```
class parser
{
public:
  explicit parser(lexer& lex) noexcept
    : lex(lex)
  {  }

  statement::ptr parse()
  {
    auto statements = std::make_unique<statement_seq>();

    token tok = lex.next();
    while(tok != token::Unknown)
    {
      switch(tok.kind)
      {
      case token::Dot:
      case token::Comma:
        {
          auto x = std::make_unique<serialization>(tok.kind);
          statements->add(std::move(x));
        } break;

      case token::Plus:
      case token::Minus:
      case token::L_Shift:
      case token::R_Shift:
        {
          auto x = std::make_unique<arithmetic>(tok.kind);
          statements->add(std::move(x));
        } break;

      case token::L_Square:
      case token::R_Square:
        {
          // TODO
        } break;

      default:
        assert(false && "Logic error: Tokenizer did not filter unknown tokens");
      }
      tok = lex.next();
    }
    return std::move(statements);
  }
private:
  lexer& lex;
};
```

As you can see, there is nothing too special. Loops themselves are quite straightforward to implement as well. However, instead of adding their contents to the outermost `statement_seq`, we have to add them to the innermost open loop. This is no real problem; a simple implementation using a stack is sufficient:

```
statement::ptr parse()
{
  auto statements = std::make_unique<statement_seq>();

  std::vector<loop*> loops;
  token tok = lex.next();
  while(tok != token::Unknown)
  {
    switch(tok.kind)
    {
    case token::Dot:
    case token::Comma:
      {
        auto x = std::make_unique<serialization>(tok.kind);
        if(loops.empty())
          statements->add(std::move(x));
        else
          loops.back()->add(std::move(x));
      } break;

    case token::Plus:
    case token::Minus:
    case token::L_Shift:
    case token::R_Shift:
      {
        auto x = std::make_unique<arithmetic>(tok.kind);
        if(loops.empty())
          statements->add(std::move(x));
        else
          loops.back()->add(std::move(x));
      } break;

    case token::L_Square:
    case token::R_Square:
      {
        if(tok == token::R_Square && loops.empty())
        {
          throw std::invalid_argument("Pre-Closed loop");
        }
        else if(tok == token::R_Square)
        {
          loops.pop_back();
        }
        else
        {
          auto x = std::make_unique<loop>(tok.kind);
          if(loops.empty())
          {
            statements->add(std::move(x));
            auto* p = static_cast<loop*>(statements->get_last().get());
            loops.emplace_back(p);
          }
          else
          {
            loops.back()->add(std::move(x));
            auto* p = static_cast<loop*>(loops.back()->get_last().get());
            loops.emplace_back(p);
          }
        }
      } break;

    default:
      assert(false && "Logic error: Tokenizer did not filter unknown tokens");
    }
    tok = lex.next();
  }
  return std::move(statements);
}
```

And now we have a nice and fresh AST-generator. Time to give functionality to the AST!
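The loop bookkeeping above boils down to bracket matching. Reduced to a standalone sketch (not the post's parser; a counter stands in for the stack of `loop*`), the "Pre-Closed loop" check looks like this:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// '[' opens a loop, ']' closes the innermost one; a ']' with nothing
// open is exactly the "Pre-Closed loop" error the parser throws
bool brackets_balanced(const std::string& code)
{
  std::size_t open = 0;
  for(char c : code)
  {
    if(c == '[')
      ++open;
    else if(c == ']')
    {
      if(open == 0)
        return false; // pre-closed loop
      --open;
    }
  }
  return open == 0; // a dangling '[' is also an error
}
```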

# BFC - Part 2

### Abstract Syntax Tree

Now that we have tokenized the source file, we need to convert those tokens into some other representation. What we would like to do is generate a so-called abstract syntax tree, which contains our Brainfuck program in a tree-like structure. Look at the following, very simple program: `++[-><].`. We want a tree which looks like this:

As seen, we can describe all statements with the types `arithmetic`, `loop`, `serialization` and `statement_seq`. However, the last one is more or less a virtual statement, since it doesn't represent a language feature. It could easily be omitted; nevertheless, we will see that it becomes quite useful later on. Anyway, this yields:

```
class statement
{
public:
  using ptr = std::unique_ptr<statement>;
public:
  explicit statement(token_kind tok) noexcept
    : tok(tok)
  {  }

  virtual ~statement() noexcept
  {  }
protected:
  token_kind tok;
};

class statement_seq : public statement
{
public:
  explicit statement_seq()
    : statement(token_kind::Unknown), statements()
  {  }
private:
  std::vector<statement::ptr> statements;
};

class arithmetic : public statement
{
public:
  explicit arithmetic(token_kind tok) noexcept
    : statement(tok)
  {
    assert(tok == token_kind::Plus    || tok == token_kind::Minus
        || tok == token_kind::L_Shift || tok == token_kind::R_Shift);
  }
};

class serialization : public statement
{
public:
  explicit serialization(token_kind tok) noexcept
    : statement(tok)
  {
    assert(tok == token_kind::Dot || tok == token_kind::Comma);
  }
};

class loop : public statement
{
public:
  explicit loop(token_kind tok)
    : statement(tok), statements()
  {
    assert(tok == token_kind::L_Square);
  }
private:
  std::vector<statement::ptr> statements;
};
```

You might wonder why we convert tokens into an abstract syntax tree. The reason is simple: it's very easy to interpret, optimize, generate code from or even pretty-print code once you've got that data structure. Needless to say, we will equip our statements with an `interpret` member function. Another thing that can happen right away is yet another abstraction: `statement_seq` and `loop` are very similar. The simple solution is to let `loop` inherit from `statement_seq`:

```
class statement_seq : public statement
{
public:
  explicit statement_seq()
    : statement(token_kind::Unknown), statements()
  {  }
  virtual ~statement_seq() noexcept
  {  }

  void add(statement::ptr ptr)
  {
    statements.emplace_back(std::move(ptr));
  }
  statement::ptr& get_last()
  {
    return statements.back();
  }
protected:
  std::vector<statement::ptr> statements;
};

class loop : public statement_seq
{
public:
  explicit loop(token_kind tok)
    : statement_seq()
  {
    this->tok = tok;
    assert(tok == token_kind::L_Square);
  }
};
```
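The ownership layout this buys us can be seen in a self-contained miniature (names mirror the post, bodies are reduced and slightly hypothetical): a sequence owns its children through `unique_ptr`, and since a loop *is* a sequence, nesting falls out for free.

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct statement
{
  virtual ~statement() = default;
  using ptr = std::unique_ptr<statement>;
};

struct statement_seq : statement
{
  void add(statement::ptr p) { statements.emplace_back(std::move(p)); }
  statement::ptr& get_last() { return statements.back(); }
  std::vector<statement::ptr> statements;
};

// a loop IS a statement sequence -- the inheritance trick from the post
struct loop : statement_seq {};
```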

# BFC - Part 1

I always wanted to write a compiler/interpreter thingy for Brainfuck. So here it is: yet another Brainfuck compiler is going to be developed.

### Tokens

We start with probably the most basic part of it: Tokens and the Tokenizer. A tokenizer, also called scanner or lexer, converts the input, which is practically always a string, into tokens. Tokens are internal representations of the language's symbols. Without further ado, here are the tokens:

```
enum class token_kind : char
{
  Plus     = '+',
  Minus    = '-',
  L_Shift  = '<',
  R_Shift  = '>',

  L_Square = '[',
  R_Square = ']',

  Dot      = '.',
  Comma    = ',',

  Unknown  = 0
};

struct token
{
  token_kind kind { token_kind::Unknown };
  std::size_t column { 0U };
  std::size_t row { 0U };
};
```

Note that this implementation might look too complex for Brainfuck; however, when we parse the source, we want to be able to report an error location to the user (a well-received feature in any compiler-using community). Thus we need a row and a column. What also helps are some operators to compare a `token` with a `token_kind`, and possibly some input and output stream operators. Their implementation is left as a tedious exercise to the reader.
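One possible shape of those comparison operators (an assumption, since the post leaves them out; the enum is a reduced copy for self-containment):

```cpp
#include <cassert>
#include <cstddef>

enum class token_kind : char { Plus = '+', Minus = '-', Unknown = 0 };

struct token
{
  token_kind kind { token_kind::Unknown };
  std::size_t column { 0U };
  std::size_t row { 0U };
};

// compare a token directly against a token_kind, ignoring its location
bool operator==(const token& t, token_kind k) { return t.kind == k; }
bool operator!=(const token& t, token_kind k) { return !(t == k); }
```

This is what makes loop conditions like `while(tok != token_kind::Unknown)` in the later parts read naturally.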

### Lexer

Now, for the lexer, we process the source file line by line, character by character. The implementation is pretty straightforward, since in Brainfuck anything besides the eight symbols `+`, `-`, `<`, `>`, `[`, `]`, `.` and `,` acts as a comment. Therefore, our automaton doesn't even need a specialized "comment mode": just ignoring anything besides `+-<>[].,` does the job. Okay, I lied a bit: one thing we need to take into account are CR and CRLF line endings for the rows. You don't want the row to be wrong.

```
class lexer
{
public:
  explicit lexer(const std::string& filepath)
    : lookup(), file(filepath, std::ios::in), col(1U), row(1U)
  {
    lookup['.'] = token_kind::Dot;
    lookup[','] = token_kind::Comma;
    lookup['+'] = token_kind::Plus;
    lookup['-'] = token_kind::Minus;
    lookup['<'] = token_kind::L_Shift;
    lookup['>'] = token_kind::R_Shift;
    lookup['['] = token_kind::L_Square;
    lookup[']'] = token_kind::R_Square;
  }

  token next()
  {
    char last = 0;
    char ch = 0;
    token t {token_kind::Unknown, col, row};
    // get() does not skip whitespace, so '\r' and '\n' reach inc_col_or_row
    if(!file.get(ch))
    {
      // EOF
      return t;
    }
    else
      inc_col_or_row(last, ch);

    auto it = lookup.find(ch);
    while(it == lookup.end())
    {
      last = ch;
      if(!file.get(ch))
      {
        // EOF
        return t;
      }
      else
        inc_col_or_row(last, ch);
      it = lookup.find(ch);
    }
    t.kind = it->second;

    return t;
  }
private:
  void inc_col_or_row(char last, char ch) noexcept
  {
    if(last == '\r' && ch == '\n')
    {
      // CRLF
      // -> do not increase row again
    }
    else if(ch == '\r' || ch == '\n')
    {
      ++row;
      col = 1U; // a new line starts at column one
    }
    else
      ++col;
  }
private:
  std::map<char, token_kind> lookup;
  std::fstream file;
  std::size_t col;
  std::size_t row;
};
```
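Since everything outside the eight symbols is a comment, the lexer's comment handling is equivalent to a plain character filter. A standalone sketch (a hypothetical helper for illustration, not part of the lexer):

```cpp
#include <cassert>
#include <string>

// drop everything that is not one of the eight Brainfuck symbols --
// exactly what the lexer's skip-loop does implicitly
std::string strip_comments(const std::string& src)
{
  const std::string ops = "+-<>[].,";
  std::string out;
  for(char c : src)
    if(ops.find(c) != std::string::npos)
      out += c;
  return out;
}
```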

If you are interested in more complex lexing strategies, for example how to differentiate between `123` and `a` for the string `123a`, look up "Maximum Munch Strategy".

# Emacs Configuration

### Emacs

#### Package initialization

```
(require 'package)

(let* ((no-ssl (and (memq system-type '(windows-nt ms-dos))
                    (not (gnutls-available-p))))
       (url (concat (if no-ssl "http" "https") "://melpa.org/packages/")))
  (add-to-list 'package-archives (cons "melpa" url) t))
(add-to-list 'package-archives '("org" . "http://orgmode.org/elpa/"))

(package-initialize)
```

#### Screw scrollbars and similar

I tend to dislike scrollbars, emacs has nice hotkey-based scrolling anyways, so why should I touch my mouse? Nuts.

```
(scroll-bar-mode -1)
(tool-bar-mode -1)

(custom-set-variables '(inhibit-startup-screen 't))
```

#### Theme

```
(require 'doom-themes)
(setq doom-themes-enable-bold t    ; if nil, bold is universally disabled
      doom-themes-enable-italic t) ; if nil, italics is universally disabled
(doom-themes-visual-bell-config)
(doom-themes-org-config)

;;(custom-set-variables '(color-theme-directory nil))
```

This enables my favorite doom-theme and the visual bell, and fixes the org-mode fontification.

```
(require 'rainbow-mode)

(define-globalized-minor-mode my-global-rainbow-mode rainbow-mode
  (lambda () (rainbow-mode 1)))

(my-global-rainbow-mode 1)
```

#### Some infos to display

e.g. battery, time, column number, …

```
(display-battery-mode t)
(display-time-mode 1)
(column-number-mode)
(size-indication-mode t)

(custom-set-variables '(battery-update-interval 30))
```

#### Keymappings

Swap meta and super.

```
(setq x-meta-keysym 'super
      x-super-keysym 'meta)
```

Faster indentation than M-\ or whatever the real hotkey was.

```
(define-key global-map (kbd "C-ü") 'indent-region)
```

I like to be able to jump to any kind of char visible in the buffer.

```
(define-key global-map (kbd "C-c f") 'iy-go-to-char)
(define-key global-map (kbd "C-c g") 'iy-go-to-char-backward)
(define-key global-map (kbd "C-c SPC") 'ace-jump-mode)
```

#### Other

• No sleeping!

I really dislike this hotkey. Why would I want to pause my emacs? Ancient function for ancient machines.

```
(global-unset-key (kbd "C-z"))
```
• Zone
```
(require 'zone)
(zone-when-idle 180)
```

A little screensaver fun. ;)

• Timestamp
```
(defun insert-timestamp ()
  "Insert string for the current time formatted like '2:34 PM' or 1507121460"
  (interactive)                 ; permit invocation in minibuffer
  ;;(insert (format-time-string "%D %-I:%M %p")))
  ;;(insert (format-time-string "%02y%02m%02d%02H%02M%02S")))
  (insert (format-time-string "%F %H:%M:%S UTC+02:00")))
```
• Misc
```
(custom-set-variables '(shell-input-autoexpand t)
                      '(history-length 1024))
(set-face-attribute 'default nil :height 100)
(ac-config-default)
(require 'wgrep)
```

### expand-region

```
(require 'expand-region)
(define-key global-map (kbd "C-ö") 'er/expand-region)
```

### yasnippet

```
(add-to-list 'load-path "~/.emacs.d/plugins/yasnippet")
(require 'yasnippet)
(yas-global-mode 1)
(setq yas-trigger-key "<tab>")
```

### org-mode

```
(require 'org)
```

#### Display preferences

Fancy bullets instead of lists of asterisks are fancier.

```
(add-hook 'org-mode-hook
          (lambda () (org-bullets-mode t)))

```

Syntax highlighting in source blocks while editing

```
(setq org-src-fontify-natively t)
```

#### Babel

```
(org-babel-do-load-languages
 'org-babel-load-languages
 '((emacs-lisp . t)
   (sh . t)
   (python . t)
   (latex . t)
   (plantuml . t)))

(setq org-plantuml-jar-path (expand-file-name "/usr/share/plantuml/plantuml.jar"))
```

#### Keymappings

```
(global-set-key (kbd "C-c o")
                (lambda () (interactive)
                  (find-file "~/org/central.org")
                  (end-of-buffer)))
(global-set-key (kbd "C-c c") 'org-capture)
(define-key global-map "\C-ca" 'org-agenda)
```

#### Agenda

Set agenda files location

```
(setq org-agenda-files (list "/home/matthis/org"))
(setq org-agenda-file-regexp ".*\\.org")
```

#### Log

```
(setq org-log-done t)
(setq org-log-repeat "time")
```

#### Publish

• Export
```
(require 'ox-reveal)
(require 'ox-twbs)
```

Reveal allows me to create nice presentations, twbs is for bootstrap export.

• org-publish
```
(setq org-publish-project-alist
      '(("emacs-conf"
         :base-directory       "~/.emacs.d/"
         :base-extension       "org"
         :publishing-directory "/home/matthis/www/posts/"
         :publishing-function  org-publish-attachment
         :section-numbers      nil
         :with-toc             t
         :html-preamble        t)

        ;; ("images"
        ;;  :base-directory "~/images/"
        ;;  :base-extension "jpg\\|gif\\|png"
        ;;  :publishing-directory "/ssh:standard@217.160.183.244:~/html/images/"
        ;;  :publishing-function  org-html-publish-to-html
        ;;  :publishing-function org-publish-attachment)

        ;; ("other"
        ;;  :base-directory "~/other/"
        ;;  :base-extension "css\\|el"
        ;;  :publishing-directory "/ssh:user@host:~/html/other/"
        ;;  :publishing-function org-publish-attachment)

        ;; ("website" :components ("orgfiles" "images" "other"))
        ))
```

This allows me to upload the stuff to my server with org-publish.

### web-mode

#### Filetypes

Add some filetypes where I want web-mode

```
(require 'web-mode)

```

#### Indentation

```
(setq web-mode-markup-indent-offset 2)
(setq web-mode-css-indent-offset 2)
(setq web-mode-code-indent-offset 2) ;; <- js, php, perl, ...
```

#### Highlight

Highlighting selected element and the column

```
(setq web-mode-enable-current-element-highlight t)
(setq web-mode-enable-current-column-highlight t)
```

### c++-ish

#### Indentation

```
(c-set-offset 'substatement-open 0)
(setq c-default-style "linux"
      c-basic-offset 4)
(setq-default indent-tabs-mode nil
              c-toggle-hungry-state t)
```

#### CEDET

```
(load-file "~/cedet/cedet-devel-load.el")
(global-ede-mode t)
```
• Folding
```
(add-to-list 'semantic-default-submodes 'global-semantic-tag-folding-mode t)
```
• Autocompletion
```
(semantic-load-enable-code-helpers)
(global-srecode-minor-mode 1)

(semantic-mode 1)

(require 'semantic/bovine/gcc)

;; Intelli Sense like
(defun my-c-mode-cedet-hook ()
  ;; (local-set-key "." 'semantic-complete-self-insert)
  ;; (local-set-key ":" 'semantic-complete-self-insert)
  ;; (local-set-key ">" 'semantic-complete-self-insert)
  )
```
• EDE
• Projects

#### Neotree

```
(add-to-list 'load-path "/home/matthis/.emacs.d/plugins/neotree")
(require 'neotree)
(global-set-key [f12] 'neotree-toggle)
```

### Java

```
(require 'eclim)
(setq eclimd-autostart t)

(defun my-java-mode-hook ()
  (eclim-mode t))

(custom-set-variables
'(eclim-eclipse-dirs '("~/eclipse"))
'(eclim-executable "~/eclipse/eclim"))

(setq help-at-pt-display-when-idle t)
(setq help-at-pt-timer-delay 0.1)
(help-at-pt-set-timer)

(require 'ac-emacs-eclim)
(ac-emacs-eclim-config)

(require 'company)
(require 'company-emacs-eclim)
(company-emacs-eclim-setup)
(global-company-mode t)
```

#### Documentation lookup

```
(eval-after-load 'haskell-mode
```

```
(require 'hs-lint)

"hs-lint binding, plus autocompletion and paredit."
(local-set-key "\C-cl" 'hs-lint)
(setq ac-sources
(append '(ac-source-yasnippet
ac-source-abbrev
ac-source-words-in-buffer
ac-sources))
(intern (concat (symbol-name x)
"-mode-hook"))
'turn-on-paredit)))

'(progn
(require 'flymake)