Author Archives: ChristianKreuzberger

I have created a special page with a Monero JavaScript Miner using CoinHive.

You can start the miner if you visit this page (and this page only): https://chkr.at/miner/

Note: Monero is something like Bitcoin, except that it can be mined in a browser. I am using this as a way to allow people to say thank you.

If you want to learn more about mining Monero with CoinHive, I would like to direct you to this YouTube video (not made by me):

 

ForeignKeys need to have the on_delete attribute set (e.g., to models.CASCADE for a cascading delete)

This also affects existing migrations. If you have migrations that you created with Django 1.8, you will run into errors (as they do not have that attribute set in the migration).

Also see this Ticket: https://code.djangoproject.com/ticket/28677
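
For illustration, here is a minimal sketch of what the new requirement looks like (the model and field names are made up):

# models.py - hypothetical example
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    # in Django 2.0, on_delete is a required argument
    author = models.ForeignKey(Author, on_delete=models.CASCADE)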

Apps should specify the "app_name" attribute in their urls.py

If you don't do this, you might run into an error when you include that urls.py in another urls.py (e.g., via include() with a namespace).

Also see this Ticket: https://code.djangoproject.com/ticket/28691
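
A minimal sketch of what this looks like (app, project and view names are made up):

# myapp/urls.py
from django.urls import path
from . import views

app_name = 'myapp'

urlpatterns = [
    path('', views.index, name='index'),
]

# project/urls.py
from django.urls import include, path

urlpatterns = [
    path('myapp/', include('myapp.urls')),
]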

SessionAuthenticationMiddleware is no longer available

If you have SessionAuthenticationMiddleware listed in your MIDDLEWARE setting (you most likely do if you are upgrading from an older Django version), you will have to remove it from your middleware list (or tuple).
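
In other words, your settings.py roughly changes like this (a sketch, with other entries omitted):

# settings.py
MIDDLEWARE = [
    'django.contrib.sessions.middleware.SessionMiddleware',
    # 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',  # remove this line
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    # ...
]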

user.is_authenticated() and user.is_anonymous() are no longer available as functions

They are now properties and have to be used without the function parentheses!
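
For example, this is the kind of code you will have to change:

# before (older Django versions)
if request.user.is_authenticated():
    ...

# Django 2.0: is_authenticated is a property
if request.user.is_authenticated:
    ...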

More Info

 

 

Ever wondered why a certain Python package was installed?

E.g., when you are installing WeasyPrint you will find that it installs a lot of other libraries, such as cffi, cairocffi and html5lib. With pipdeptree you can visualize this 🙂

pip install pipdeptree

pipdeptree

WeasyPrint==0.40
  - cairocffi [required: >=0.5, installed: 0.8.0]
    - cffi [required: >=1.1.0, installed: 1.11.0]
      - pycparser [required: Any, installed: 2.18]
  - CairoSVG [required: >=1.0.20, installed: 2.0.3]
    - cairocffi [required: Any, installed: 0.8.0]
      - cffi [required: >=1.1.0, installed: 1.11.0]
        - pycparser [required: Any, installed: 2.18]
    - cssselect [required: Any, installed: 1.0.1]
    - lxml [required: Any, installed: 3.8.0]
    - pillow [required: Any, installed: 4.2.1]
      - olefile [required: Any, installed: 0.44]
    - tinycss [required: Any, installed: 0.4]
  - cffi [required: >=0.6, installed: 1.11.0]
    - pycparser [required: Any, installed: 2.18]
  - cssselect2 [required: >=0.1, installed: 0.2.0]
    - tinycss2 [required: Any, installed: 0.6.0]
      - webencodings [required: >=0.4, installed: 0.5.1]
  - html5lib [required: >=0.999999999, installed: 0.999999999]
    - setuptools [required: >=18.5, installed: 36.5.0]
    - six [required: Any, installed: 1.11.0]
    - webencodings [required: Any, installed: 0.5.1]
  - Pyphen [required: >=0.8, installed: 0.9.4]
  - tinycss2 [required: >=0.5, installed: 0.6.0]
    - webencodings [required: >=0.4, installed: 0.5.1]
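
By the way, if you want to answer the question the other way round ("which packages need cffi?"), pipdeptree can also print the tree in reverse (assuming your version of pipdeptree supports this flag):

pipdeptree --reverse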

Remember those days when you just did something like

pip install numpy
pip install matplotlib

and wrote python code in some file called calculate_and_plot.py and your (data science) project just got some nice plots?

This was probably before you ever heard about Python virtual environments. And even if you did hear about them, you probably said to yourself: Why would I add another layer of complexity? I don't need that for now, it's just a little project.

"I can handle my python libraries just fine without introducing more complexity!"

Well, let me tell you this: You are both right and wrong. If your goal is just a little project that you will use once and then forget about, then you really don't need a virtual environment. However, this does not mean that you shouldn't use one! You will end up having to revisit your code at some point, and then you are going to ask yourself the following two questions:

  • What was that library called that I used to do XYZ? (You probably wrote that down in a README anyway, right?)
  • What version of said library did I use? Was it 3.1? 5.7? 1.0? 0.9rc1? Oh my god there are so many different versions!?!

Both questions are only symptoms of a problem with how Python libraries are usually managed. Most operating systems (Windows as well as Linux) will install your Python libraries (such as numpy, matplotlib, Django, ...) into your OS-wide Python site-packages directory (that's also why you usually need admin rights or sudo to do this).

"But virtual environments are so complex, and I really need to finish this project on time, so ..."

Let me give you a quick introduction and you will see that they are not complex at all. Also, about the time component: Not using a virtual environment could be one of these things that you might regret later (e.g., when you give your Python code to a colleague).

What are Python Virtual Environments?

Actually, the name is kind of misleading. "Virtual" usually implies that there is some kind of virtualization going on. This is not the case. It's really just a set of symbolic links (e.g., for the python binary) and directories that contain your python libraries.

What it really does is modify your local environment variables, telling the shell where to find the Python interpreter and Python libraries.

How do I create a Python Virtual Environment?

IMHO the best and simplest way to create and manage your virtual environments, or "venvs", is to do it in your local project folder. Assuming you have the following project:

  • research_paper_876/
    • statistics.py
    • data/
      • run1.csv
      • run2.csv
      • run3.csv
    • plots/
      • run1.png
      • run2.png
      • run3.png

Then you would create your virtual environment within the folder research_paper_876 like this:

cd research_paper_876
virtualenv -p python3 venv

This will create a folder called venv in your research_paper_876 directory. Note: If you are using git, svn or any other version control system, I recommend adding an exception for the venv directory. DO NOT ADD THE VENV DIRECTORY TO YOUR VERSION CONTROL SYSTEM!
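
For git, that exception is a single line in your .gitignore file (assuming you named the directory venv, as above):

# .gitignore
venv/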

Your directory structure will now look like this:

research_paper_876/

  • statistics.py
  • data/
    • run1.csv
    • run2.csv
    • run3.csv
  • plots/
    • run1.png
    • run2.png
    • run3.png
  • venv/
    • bin/
      • python (symbolic link to your python installation)
      • pip (symbolic link to pip)
      • ...
    • include/
    • lib/
      • python3.*/
        • site-packages/
          • ...
        • ...

Okay, next step: Activate your venv!

This is done with the following shell command:

source venv/bin/activate

Often your shell will show you that you have activated a virtual environment by adding a prefix to the prompt, e.g.:

ckreuzberger@localhost:~/research_paper_876$ source venv/bin/activate
(venv)ckreuzberger@localhost:~/research_paper_876$

Now that you have activated your venv, you can install the desired libraries (e.g., numpy and matplotlib).

pip install numpy matplotlib

This will install these libraries and all required dependencies into your venv/lib/python3.*/site-packages/ folder.

If you now run your python code (e.g., python statistics.py) within your venv, only the libraries installed in your venv will be used.
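
If you want to convince yourself of that, here is a quick sketch you can run from within the activated venv:

# quick check, run inside the activated venv
import sys
print(sys.executable)  # e.g., /home/you/research_paper_876/venv/bin/python
print(sys.prefix)      # e.g., /home/you/research_paper_876/venv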

Two more things you need to know:

First: Create a file called requirements.txt in your project's main directory by using the following command:

pip freeze > requirements.txt

This will fill your requirements.txt with a set of libraries and versions. When I wrote this tutorial it looked like this:

numpy==1.13.1
matplotlib==2.0.2

Second: When you are finished working on your project, you should deactivate your venv by running the following command:

deactivate

How to re-create the same environment later

If you give your project to a colleague, or publish it on GitHub, etc., you would supply your code and the requirements.txt file. Your colleague can then create the exact same Python virtual environment by executing the following commands:

virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt

This is something you could (and should) write into a README file, so you and potentially others don't forget about it later.

 

Where can I read more about this?

I recommend reading the official docs on python.org about virtual environments: https://docs.python.org/3/tutorial/venv.html

 

One major difference between Windows and Linux is that Windows "prefers" \r\n (CRLF) for newlines. This has been known for years, and most tools handle it automatically (e.g., FileZilla). However, Docker on Windows does not handle this, which can cause massive issues that are very hard to debug. For instance, you might get this error:

standard_init_linux.go:175: exec user process caused "no such file or directory"

This issue is really easy to fix: convert your files from \r\n (CRLF) to just \n (LF). Most editors/IDEs on Windows can do that for you (including Notepad++). However, this is not a permanent solution. If you are using Git, you need to configure either your repository or your global Git configuration to prefer \n over \r\n:

https://help.github.com/articles/dealing-with-line-endings/

Unless you really need to keep your \r\n, I recommend changing this globally with

git config --global core.autocrlf input
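
If you prefer a per-repository setting (this is what the GitHub article above describes), a .gitattributes file along these lines should also do the trick (a sketch; adapt the patterns to your project):

# .gitattributes
* text=auto
*.sh text eol=lf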

 

I needed a quick way of converting videos to GIFs on Linux. ffmpeg and this post on Stack Exchange to the rescue!

The result is this neat little bash script:

#!/bin/bash

if [[ $# -eq 0 ]] ; then
 echo "Usage: videoToGif inputFile [outputFile] [FPS] [WIDTH]"
 echo "Example: videoToGif inputFile.ext outputFile.gif 60 360"
 exit 0
fi

inputFile="$1"
outputFile="${2:-output.gif}"

FPS=${3:-30}
WIDTH=${4:-360}

# Generate a palette first, for better GIF quality
ffmpeg -i "$inputFile" -vf "fps=$FPS,scale=$WIDTH:-1:flags=lanczos,palettegen" tmp_palette.png

# Generate the gif using the palette
ffmpeg -i "$inputFile" -i tmp_palette.png -loop 0 -filter_complex "fps=$FPS,scale=$WIDTH:-1:flags=lanczos[x];[x][1:v]paletteuse" "$outputFile"

rm tmp_palette.png

I used it to convert a Screencast created with "recordMyDesktop" (from OGV format) to a GIF.

During my recent journeys I discovered AngularJS, a JavaScript framework dedicated to writing Single Page Applications (SPAs). One of the key concepts of Angular is the so-called digest cycle. Every time a user clicks a button, a timer created by $timeout or $interval fires, an $http request finishes, or a promise is resolved (e.g., via $q.defer), a digest cycle is executed.

This digest cycle causes the whole page to be re-evaluated: all watchers check whether the variable (or function) they are watching has changed (in fact, this check is executed twice).

Now, in some of my AngularJS projects I had to implement a ticking clock showing hours, minutes and seconds. The most trivial way of implementing this in Angular would be something like this:

// controller
$scope.updateClock = function() {
    $scope.curDate = new Date();
};
$interval($scope.updateClock, 1000); // update once per second

<!-- template -->
<p>{{ curDate | date:'hh:mm:ss' }}</p>

These lines of code are probably found in many AngularJS projects across the world. But what if I told you that this is in fact not a good solution? Here is why: $interval will cause a digest cycle! In this very simple example, it will cause one digest cycle per second. If you have some complex computation or many watchers, this could significantly slow down your SPA and lead to users abandoning your page. In addition, mobile devices will drain their battery much faster. The main question however is: Why would you want your whole application to evaluate again, when you only want to refresh time on your clock?

To fix this problem, we can help ourselves by working with the concepts that JavaScript and HTML5 provide, and thereby make clocks great again! To avoid causing unnecessary digest cycles, we need to avoid the watchers and data binding of AngularJS. The experienced AngularJS programmer will already know what is coming next...

Make DOM manipulation great again

But but but... You aren't supposed to do that in Angular!

Right! You aren't supposed to do that in your controllers. However, you are allowed to do that in directives. Essentially, I am going to show you how to build an Angular directive which uses JavaScript's own setInterval (note: you could also use $interval with invokeApply=false) to modify a DOM element, displaying the current time.

    angular.module('angular-ticking-clock', []).directive('tickingClock', ['$filter', function($filter) {
        return {
            restrict: 'E',
            link: function(scope, element, attrs) {
                var updateTimer = undefined;

                var updateDateTime = function() {
                    element.text($filter('date')(new Date(), attrs.dateTimeFormat));
                };
                
                updateTimer = setInterval(updateDateTime, attrs.updateInterval);
      
                /**
                 * On Destroy of this directive, we need to cancel the timer
                */
                scope.$on(
                    "$destroy",
                    function( event ) {
                        if (updateTimer)
                            clearInterval(updateTimer);
                    }
                );

            }
        };
    }]);

The most important part of this is the scope.$on("$destroy", ...)! Whenever this directive is destroyed, we need to clear the interval timer so that it no longer fires. The second most important part is that this directive should always be used as an element (restrict: 'E'), ensuring that it has its very own DOM element to modify.
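
For completeness, this is roughly how the directive would be used in a template (the attribute names follow Angular's normalization of attrs.dateTimeFormat and attrs.updateInterval from the code above):

<ticking-clock date-time-format="HH:mm:ss" update-interval="1000"></ticking-clock>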

Other than that, that's it. Feel free to use this code as you like. I also created a GitHub repo and an NPM package for it.

# Create 200 fake users (e.g., for populating a development database);
# each user's password is set to their username.
from django.contrib.auth.models import User
from faker import Faker

fake = Faker()

for i in range(0, 200):
    # generate a random full name and split it into first and last name
    name = fake.name()
    first_name = name.split(' ')[0]
    last_name = ' '.join(name.split(' ')[-1:])
    # username = first letter of the first name + the last name, all lowercase
    username = first_name[0].lower() + last_name.lower().replace(' ', '')
    user = User.objects.create_user(username, password=username)
    user.first_name = first_name
    user.last_name = last_name
    user.is_superuser = False
    user.is_staff = False
    user.email = username + "@" + last_name.lower() + ".com"
    user.save()
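
Note that this snippet assumes a configured Django project; the easiest way to run it is to paste it into a python manage.py shell session (or wrap it in a small management command).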

While people on the Internet are still fighting about npm vs bower, there are some packages that are only available for npm and some that are only available for bower. Unfortunately you will run into problems, sooner or later, just like I did today.

The package pdfmake enables JavaScript applications to generate PDFs, both in the browser and in a Node.js server application. However, the authors explicitly state that the bower version should be used for web applications, and the npm version for server applications.

But if you want to use npm and want to avoid bower (for whatever reason), then you will run into a problem.
Thankfully, somebody created a wrapper package for npm: https://github.com/AaronBuxbaum/pdfmake-client

You can install it via

npm install pdfmake-client --save