For some reason I'm struggling to initialize a numpy.chararray with spaces. This works:

    char_array1 = np.chararray((3, 3))
    char_array1[:] = 'a'
    char_array1

Output:

    chararray([['a', 'a', 'a'],
               ['a', 'a', 'a'],
               ['a', 'a', 'a']], dtype='|S1')

This doesn't work with a space.
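The likely culprit: chararray strips trailing whitespace when items are read back, so an assigned space appears to vanish. A minimal sketch of the behavior and a workaround using a plain ndarray:

```python
import numpy as np

# np.chararray strips trailing whitespace on item access, so assigning
# ' ' appears to produce empty strings when you read the array back:
ca = np.chararray((3, 3))
ca[:] = ' '

# A plain ndarray does not strip anything; np.full is the simplest fix:
sp = np.full((3, 3), ' ', dtype='<U1')
print(sp)
```

This is also why the NumPy docs steer users away from chararray toward ordinary arrays of string dtype.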

I have a JSON string which contains a dictionary mapping index to float values. This is representative of a vector. For example,

    {
        'distro': {0: 2.42, 3: 2.56},
        'constant': 4.55,
        'size': 10000
    }

represents a vector of size 10000 having 2.42 at index 0
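One way to expand such a sparse description into a dense vector is to start from the constant and overwrite the explicit indices; a sketch, using the key names from the example above (note JSON object keys are always strings, so they need an int cast):

```python
import json
import numpy as np

spec = json.loads('{"distro": {"0": 2.42, "3": 2.56}, "constant": 4.55, "size": 10000}')

# Fill with the constant first, then overwrite the listed indices.
vec = np.full(spec["size"], spec["constant"])
for idx, val in spec["distro"].items():
    vec[int(idx)] = val   # JSON keys are strings
```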

When I use np.stack, I sometimes have to pass an axis, like axis=1. I don't understand what the axis means for it. For example,

    c1 = np.ones((2, 3))
    c2 = np.zeros((2, 3))
    c = np.stack([c1, c2], axis=1)

this shows:

    array([[[1., 1., 1.],
            [0., 0., 0.]],

           [[1., 1., 1.],
            [0., 0., 0.]]])
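The axis argument is the position of the new dimension that np.stack inserts; the inputs are laid out along that dimension. Comparing the three choices for two (2, 3) inputs makes this concrete:

```python
import numpy as np

c1 = np.ones((2, 3))
c2 = np.zeros((2, 3))

a0 = np.stack([c1, c2], axis=0)   # shape (2, 2, 3): a0[i] is input i
a1 = np.stack([c1, c2], axis=1)   # shape (2, 2, 3): a1[:, i] is input i
a2 = np.stack([c1, c2], axis=2)   # shape (2, 3, 2): a2[..., i] is input i
```

So with axis=1, slicing `a1[:, 0]` recovers c1 and `a1[:, 1]` recovers c2, which is exactly the interleaved printout shown above.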

I have a python function that employs the numpy package. It uses the numpy.sort and numpy.array functions as shown below:

    def function(group):
        pre_data = np.sort(np.array(
            [c["data"] for c in group[1]],
            dtype=np.float64,
        ))

How can I re-write the s

I have the following problem. I want to create a numpy matrix of size 2^L x (L+2). The first column holds variables, which I define later in the program. The last L columns should contain all possible ways to distribute zeros and ones (in my opinion, bi

I'm currently using python 2.7.1 with the packages shown below:

    In [4]: scipy.__version__
    Out[4]: '0.17.0'
    In [5]: numpy.__version__
    Out[5]: '1.10.4'
    In [6]: skimage.__version__
    Out[6]: '0.12.3'

Looking into the What's New page for python 3.5, I co

I was trying to do numerical linear algebra computation in C++. I used Python NumPy for a quick model, and I would like to find a C++ linear algebra package for some further speed-up. Eigen seems to be a good starting point. I wrote a small performan

Is it possible to apply a numpy function based on a string? If I give 'max', call np.max.

    values = np.array([[1, 2, -1], [2, 3, 6], [0, -1, 4]])
    aggregator = 'max'
    print np.max(values, axis=0)
    >>> [2 3 6]

What I hope for is something like this:

    some_cool_f
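One standard approach is getattr, which looks a function up on the numpy module by name (in real code it is safer to check the name against a whitelist rather than accept arbitrary strings):

```python
import numpy as np

values = np.array([[1, 2, -1], [2, 3, 6], [0, -1, 4]])
aggregator = 'max'

# getattr(np, 'max') returns the np.max function object.
func = getattr(np, aggregator)
result = func(values, axis=0)
print(result)   # [2 3 6]
```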

I am trying to concat the following:

    df1
                                    price  side         timestamp
    timestamp
    2016-01-04 00:01:15.631331072  0.7286     2  1451865675631331
    2016-01-04 00:01:15.631399936  0.7286     2  1451865675631400
    2016-01-04 00:01:15.631860992  0.7286     2  1451865675631861
    2016-01-04

There should be a way to turn lists like these:

    a = [[1], [2], [3], [4], [5]]
    b = [[6], [7], [8], [9], [10]]

into something like this:

    c = [[1, 6], [2, 7], [3, 8], [4, 9], [5, 10]]

Right now I'm accomplishing this using for loops:

    c = []
    for pos in ra
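The loop can collapse to a single comprehension: zip pairs the sublists and + concatenates each pair.

```python
a = [[1], [2], [3], [4], [5]]
b = [[6], [7], [8], [9], [10]]

# zip walks both lists in lockstep; + joins each pair of one-element lists.
c = [x + y for x, y in zip(a, b)]
print(c)   # [[1, 6], [2, 7], [3, 8], [4, 9], [5, 10]]
```

With NumPy, `np.hstack([a, b]).tolist()` gives the same result.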

For each element in a randomized array of 2D indices (with potential duplicates), I want to "+= 1" the corresponding cell of a 2D zero array. However, I don't know how to optimize the computation. Using a standard for loop, as shown here, de
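The vectorized tool for this is np.add.at, an unbuffered +=: repeated indices are each applied, whereas plain fancy-indexed `grid[rows, cols] += 1` counts duplicates only once. A sketch with hypothetical random indices:

```python
import numpy as np

rng = np.random.default_rng(0)
idx = rng.integers(0, 5, size=(100, 2))   # random 2-D indices, duplicates likely

grid = np.zeros((5, 5))
# Unbuffered in-place add: every (row, col) pair increments its cell,
# even when the same pair occurs multiple times.
np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)
```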

I have the following code:

    x = -10
    for i in range(2, 10):
        print i, " | ", np.exp(-x**i)

with the following output:

    2 | 3.72007597602e-44
    3 | inf
    4 | 0.0
    5 | inf
    6 | 0.0
    7 | inf
    8 | 0.0
    9 | inf

Why is the result ~0 for i even and inf for i odd? Sinc
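The key is operator precedence: `-x**i` parses as `-(x**i)`, and with x = -10 the sign of x**i alternates with i. Even i gives exp of a huge negative number (underflow to 0), odd i gives exp of a huge positive number (overflow to inf):

```python
import numpy as np

x = -10
# ** binds tighter than unary minus, so -x**i means -(x**i).
assert -x**2 == -100    # i even: exp(-10**i) underflows toward 0
assert -x**3 == 1000    # i odd:  exp(+10**i) overflows to inf

assert np.exp(np.float64(-x**4)) == 0.0          # exp(-10000) -> 0.0
with np.errstate(over='ignore'):
    assert np.exp(np.float64(-x**3)) == np.inf   # exp(1000) -> inf
```

Writing `np.exp(-(x**i))` explicitly, or `np.exp((-x)**i)` if the other grouping was intended, removes the ambiguity.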

I want to create a Pandas DataFrame filled with NaNs. During my research I found an answer:

    import pandas as pd
    df = pd.DataFrame(index=range(0, 4), columns=['A'])

This code results in a DataFrame filled with NaNs of type "object". So they cannot
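Passing NaN as the data (or specifying a float dtype) avoids the object column: the DataFrame then holds real float64 NaNs that numeric operations understand. A minimal sketch:

```python
import numpy as np
import pandas as pd

# Scalar data + index/columns broadcasts the NaN to every cell,
# and the inferred dtype is float64 rather than object.
df = pd.DataFrame(np.nan, index=range(4), columns=['A'])
print(df.dtypes)
```

`pd.DataFrame(index=range(4), columns=['A'], dtype='float64')` achieves the same thing.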

How best to write a function that can accept either scalar floats or numpy vectors (1-d array), and return a scalar, 1-d array, or 2-d array, depending on the input? The function is expensive and is called often, and I don't want to place a burden on
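A common pattern for this is to promote the input with np.asarray, compute fully vectorized, and demote the result back to a scalar only when the caller passed one; a sketch with an illustrative stand-in computation (the function name and body are hypothetical):

```python
import numpy as np

def expensive(x):
    x = np.asarray(x, dtype=float)
    scalar_input = (x.ndim == 0)      # 0-d array means a scalar came in
    result = x ** 2 + 1               # stand-in for the real computation
    return result.item() if scalar_input else result
```

The vectorized body runs once per call regardless of input shape, so scalar callers pay only the tiny asarray/item overhead rather than a Python-level loop.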

Say I have an arbitrary numpy matrix that looks like this:

    arr = [[ 6.0 12.0  1.0]
           [ 7.0  9.0  1.0]
           [ 8.0  7.0  1.0]
           [ 4.0  3.0  2.0]
           [ 6.0  1.0  2.0]
           [ 2.0  5.0  2.0]
           [ 9.0  4.0  3.0]
           [ 2.0  1.0  4.0]
           [ 8.0  4.0  4.0]
           [ 3.0  5.0  4.0]]

What would be an efficient way o

I'm doing simulations for scientific computing, and I'm almost always going to want to be in the interactive interpreter to poke around at the output of my simulations. I'm trying to write classes to define simulated objects (neural populations) and

Say I have an ordered array/list like this one:

    a = [0.2, 0.35, 0.88, 1.2, 1.33, 1.87, 2.64, 2.71, 3.02]

I want to find the largest difference between adjacent elements efficiently. In this case it would be (2.64 - 1.87) = 0.77. I could use a for loo
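np.diff computes all adjacent differences in one vectorized pass, so the largest gap is a diff followed by a max (and argmax locates it):

```python
import numpy as np

a = [0.2, 0.35, 0.88, 1.2, 1.33, 1.87, 2.64, 2.71, 3.02]

d = np.diff(a)         # adjacent differences a[i+1] - a[i]
gap = d.max()          # 0.77 here
where = np.argmax(d)   # index of the left element of the widest pair
```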

I have on the order of 10^5 binary files, which I read one by one in a for loop with numpy's fromfile and plot with pyplot's imshow. Each file takes about a minute to read and plot. Is there a way to speed things up? Here is some pseudo code to explai

I have a boolean numpy array as follows:

    bool_arr = array([[ True,  True,  True,  True],
                      [False, False,  True,  True],
                      [False, False, False,  True]], dtype=bool)

I want to compare along the rows, returning True only for the first instance of True, otherwi
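A cumulative-sum trick does this without a loop: the running count of Trues along each row equals 1 exactly at the first True, so masking with the original array isolates that position:

```python
import numpy as np

bool_arr = np.array([[ True,  True,  True,  True],
                     [False, False,  True,  True],
                     [False, False, False,  True]])

# cumsum == 1 marks where the running count first reaches one; AND-ing
# with bool_arr keeps only the True that caused it.
first_true = bool_arr & (np.cumsum(bool_arr, axis=1) == 1)
```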

I am trying to recreate some of the work from the blog posting http://sarvamblog.blogspot.com/2013/04/clustering-malware-corpus.html

    import itertools
    import glob
    import numpy, scipy, os, array
    from scipy.misc import imsave

    for filename in list(glob.gl

I need to calculate the area where two functions overlap. I use normal distributions in this particular simplified example, but I need a more general procedure that adapts to other functions too. See image below to get an idea of what I mean, where t
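One general numerical approach (not tied to normal distributions): sample both functions on a common grid and integrate the pointwise minimum, which is exactly the shared area under the two curves. A sketch, with the grid bounds and curve parameters as assumptions:

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 20001)   # grid wide/fine enough for the tails
dx = x[1] - x[0]
f = gaussian(x, 0.0, 1.0)
g = gaussian(x, 1.0, 1.5)

# Overlap area = integral of min(f, g); rectangle rule on the grid.
overlap = np.minimum(f, g).sum() * dx
```

Any two sampled curves work here; only the `gaussian` helper is specific to this example.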

I have a loop which generates an array from a text file. Every time it passes through the loop I want it to add the new array to the old one, but I'm not sure how to do this. For example:

    loop = np.arange(1, 50)
    for arg in loop:
        str(arg)
        a = np.genfromtxt(
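Growing a NumPy array inside a loop re-allocates on every iteration; the usual pattern is to collect the pieces in a Python list and concatenate once at the end. A sketch with a stand-in for the genfromtxt call:

```python
import numpy as np

chunks = []
for arg in range(1, 50):
    # Stand-in for: np.genfromtxt(filename_for(arg))
    data = np.array([arg, arg + 1], dtype=float)
    chunks.append(data)          # list append is cheap

a = np.concatenate(chunks)       # one allocation at the end
```

`np.vstack(chunks)` is the analogous call when each file yields a 2-D block with matching columns.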

I generated a lower triangular matrix, and I want to complete the matrix using the v
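Mirroring the lower triangle into the upper one is a single expression: `L + L.T` reflects the values but doubles the diagonal, so subtract the diagonal once:

```python
import numpy as np

# Example lower triangular matrix.
L = np.tril(np.arange(1.0, 10.0).reshape(3, 3))

# L + L.T doubles the diagonal; np.diag(np.diag(L)) removes one copy.
sym = L + L.T - np.diag(np.diag(L))
```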

Given a matrix QT:

    % ipython
    Python 2.7.3

    In [3]: QT.dtype
    Out[3]: dtype('float64')
    In [4]: QT.__class__
    Out[4]: numpy.ndarray
    In [5]: QT.flags
    Out[5]:
      C_CONTIGUOUS : True
      F_CONTIGUOUS : False
      OWNDATA : True
      WRITEABLE : True
      ALIGNED : True
      UPDATEIFCO

I just discovered - by chance - that an array in numpy may be indexed by an empty tuple:

    In [62]: a = arange(5)
    In [63]: a[()]
    Out[63]: array([0, 1, 2, 3, 4])

I found some documentation on the numpy wiki ZeroRankArray: (Sasha) First, whatever choice

I have the following range of numpy data (deltas of usec timestamps):

    array([ 4.312, 4.317, 4.316, 4.32 , 4.316, 4.316, 4.319, 4.317, 4.317,
            4.316, 4.318, 4.316, 4.318, 4.316, 4.318, 4.317, 4.317, 4.317,
            4.316, 4.317, 4.318, 4.316, 4.318, 4.316, 4.31

What is the difference between an iterable and an array_like object in Python programs which use Numpy? Both iterable and array_like are often seen in Python documentation and they share some similar properties. I understand that in this context an a

Hey guys, I'd like to know the fastest/most optimized way of taking the element-wise maximum of "n" matrices in Python/NumPy. For example:

    import numpy as np
    matrices = [np.random.random((5, 5)) for i in range(10)]
    # the function np.maximum
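np.maximum is a binary ufunc, so its .reduce method folds it over the whole list in one call; stacking and reducing along the new axis is equivalent:

```python
import numpy as np

matrices = [np.random.random((5, 5)) for i in range(10)]

# Fold the binary np.maximum across all n matrices.
result = np.maximum.reduce(matrices)

# Equivalent: stack into one (10, 5, 5) array and reduce along axis 0.
result2 = np.stack(matrices).max(axis=0)
```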

I am trying to create a program that reduces any given matrix to reduced row echelon form. What I'm trying to do is divide each entry in a row by the leading number. For example, say I have:

    [[3, 4, 5],
     [1, 2, 3]]  # a 2-d array

which gives:

    [3
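One sketch of the row-scaling step: find each row's leading (first nonzero) entry and divide the row by it. A float dtype matters here, since integer division would truncate the fractions:

```python
import numpy as np

A = np.array([[3.0, 4.0, 5.0],
              [1.0, 2.0, 3.0]])   # float dtype so division keeps fractions

for i in range(A.shape[0]):
    lead = A[i, np.nonzero(A[i])[0][0]]   # first nonzero entry of row i
    A[i] = A[i] / lead                    # leading entry becomes 1
```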

I am working on some plots and statistics for work, and I am not sure how I can do some statistics using numpy: I have a list of prices and another one of basePrices. And I want to know how many prices are X percent above their basePrice, how many are
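Counting elements that satisfy a condition is a vectorized comparison plus np.count_nonzero; a sketch with illustrative data (the arrays and the threshold X are assumptions):

```python
import numpy as np

prices = np.array([110.0, 95.0, 130.0, 100.0])
base_prices = np.array([100.0, 100.0, 100.0, 100.0])
X = 5.0  # percent threshold (assumption)

# Percentage deviation of each price from its base price.
pct = (prices - base_prices) / base_prices * 100.0

n_above = np.count_nonzero(pct > X)    # more than X% above base
n_below = np.count_nonzero(pct < -X)   # more than X% below base
```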