
So I am trying to calculate Feigenbaum's constant for the logistic map: $$ x_{n+1} = 4 \lambda x_n (1-x_n) $$ I am writing this in Python, and the relevant pieces of my code are:

import numpy as np
import matplotlib.pyplot as plt

def logistic(L, x):
    return 4*L*x*(1-x)

n = 1000           # number of lambda values
iterations = 1000  # total iterations of the map
keep = 500         # number of final iterates to plot

L = np.linspace(0.4, 1.0, n)
x = 1e-4 * np.ones(n)

for i in range(iterations):
    x = logistic(L, x)
    if i >= (iterations - keep):
        plt.plot(L, x, '.k', markersize=0.1)

I currently only have the packages numpy and matplotlib imported in my code.

So this gives me a nice bifurcation diagram, but I was curious how I could calculate Feigenbaum's constant from it. I know I can go through it region by region and manually work out the parameter values where period doubling occurs, but I would like to do something more robust. I did read on Wikipedia that Feigenbaum's constant can be calculated from $$ \delta = \lim_{n \to \infty} \frac{\lambda_{n-1} - \lambda_{n-2}}{\lambda_{n} - \lambda_{n-1}}, $$ where $\lambda_n$ is the parameter value at which the $n$-th period doubling occurs. But with the code I currently have in place, I can only think of doing this through manual calculation. Any suggestions would be greatly appreciated.
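For a quick sanity check on that ratio, here is a small sketch that plugs in the first few published period-doubling values for this parameterisation (converted from the usual $r\,x(1-x)$ values $r_1 = 3$, $r_2 = 1+\sqrt{6}$, $r_3 \approx 3.54409$ via $\lambda = r/4$); the numbers and variable names here are my own illustration, not part of the question's code:

# approximate delta from the first three known period-doubling points
lam1 = 3.0 / 4            # onset of period 2
lam2 = (1 + 6**0.5) / 4   # onset of period 4
lam3 = 3.54409 / 4        # onset of period 8 (approximate)

print((lam2 - lam1) / (lam3 - lam2))   # about 4.75; tends to 4.669... for later doublings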

  • I am not sure whether there is a shortcut. I think Wikipedia only mentions the Feigenbaum constants; I did not see a method to find them. – Peter Mar 17 '19 at 15:08

1 Answer


It's a little bit tricky. Here's my try:

import pandas as pd, numpy as np
from random import random

seed = 0.6765432
n = 1000      # number of different rate values
iters = 1000  # number of logistic map iterations
acc = 8       # number accuracy (decimal places)

def logistic(L, x):
    return L * x * (1 - x)

def check(x):
    # 1.0 when two consecutive counts agree (no fluctuation), else 0.0
    return float(x.iloc[1] == x.iloc[0])

def feigenbaum(seed, acc, iters, n):
    '''Computes the Feigenbaum constant with acc accuracy, beginning from seed,
    iterating the logistic function iters times in separate columns,
    for n different values of L.'''
    data = pd.DataFrame({'L': np.linspace(0.4, 4.0, n),
                         1: seed * np.ones(n)})       # the L series and the seed column

    for i in range(2, iters):
        data[i] = logistic(data['L'], data[i - 1])    # each iteration in a new column

    drop_cols = list(range(1, iters - 50))
    data = data.drop(columns=drop_cols)               # keep the last 50 iterations
    data = data.round(acc)                            # round to acc decimal places

    points = data.melt(id_vars='L').groupby('L').nunique()
    # iterations in one column, then group by L, counting the unique values for each L
    points['Fluctuation'] = points.value.rolling(2).apply(check, raw=False)
    # flag fluctuations (spurious occurrences of values) by comparing each row with its predecessor
    points = points[points.Fluctuation == True]       # drop fluctuations

    doubling = points[(points.value == 2) | (points.value == 4) | (points.value == 8)]
    # select the powers-of-2 occurrences
    doubling = doubling.drop(columns=['variable', 'Fluctuation']).reset_index()
    doubling = doubling.groupby('value').min()
    # the smallest L value for each doubling

    feigenbaum = (doubling.loc[4, 'L'] - doubling.loc[2, 'L']) / (doubling.loc[8, 'L'] - doubling.loc[4, 'L'])

    return round(feigenbaum, acc)


print(feigenbaum(random(), acc, iters, n))

I created this answer since I didn't find anything relevant online. I would appreciate any comments or further contribution!
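Since the question only has numpy and matplotlib available, a rough numpy-only sketch of the same idea (count the distinct attractor values for each $\lambda$ and take the first $\lambda$ at which the count reaches 2, 4 and 8) might look like the following. The function name estimate_delta, the $\lambda$ range, and the rounding tolerance are my own choices, and the result is sensitive to them, so treat it as a rough estimate:

import numpy as np

def estimate_delta(n=4000, iterations=2000, keep=200, decimals=5):
    """Rough numpy-only estimate of Feigenbaum's delta from the first three
    period doublings of x -> 4*lambda*x*(1-x). Sensitive to iterations,
    keep and decimals, so treat the result as approximate."""
    L = np.linspace(0.70, 0.90, n)        # lambda range covering the first doublings
    x = 1e-4 * np.ones(n)
    orbit = np.empty((keep, n))

    for i in range(iterations):
        x = 4 * L * x * (1 - x)
        if i >= iterations - keep:        # store the tail of the orbit
            orbit[i - (iterations - keep)] = x

    # number of distinct (rounded) attractor values for each lambda
    counts = np.array([np.unique(np.round(orbit[:, j], decimals)).size
                       for j in range(n)])

    # first lambda at which the attractor reaches 2, 4 and 8 points
    l1, l2, l3 = (L[np.argmax(counts >= p)] for p in (2, 4, 8))
    return (l2 - l1) / (l3 - l2)

print(estimate_delta())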