When implementing the Gauss-Jacobi algorithm in Python, I found that two different implementations take a significantly different number of iterations to converge.

The first implementation, `GaussJacobi(A, b, x, x_solution, tol)`, is what I originally came up with: it fills a fresh buffer `x_new` (the x(k+1) vector) one component at a time. The second implementation is based on this article and computes the whole update with a single `np.dot`. Sketches of both appear at the end of this post.

The first implementation takes 37 iterations to converge to an error of 1e-8, while the second takes only 7 iterations. What makes the second implementation so much faster than the first? My guess is that it has something to do with the `np.dot` function, but I don't understand why that would behave differently from computing each dot product independently. When inspecting the methods as they run, it seems the fast method produces a very good guess on its first iteration.

I've also implemented two other methods, the Gauss-Seidel method and the SOR method, both in a similar way to my original, slow Gauss-Jacobi method (sketches of these follow below as well). The first of these is called the Gauss-Seidel Method - even though, as noted by Gil Strang in his Introduction to Applied Mathematics, Gauss didn't know about it and Seidel didn't recommend it. It is described by

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}\, x_j^{(k+1)} - \sum_{j>i} a_{ij}\, x_j^{(k)}\right), \qquad i = 1, \dots, n.$$

This can also be written

$$(D + L)\, x^{(k+1)} = b - U x^{(k)},$$

where $D$, $L$, and $U$ are the diagonal, strictly lower, and strictly upper triangular parts of $A$.

To compare all four methods, I ran randomized tests on 100 NxN diagonally dominant matrices for each N from 4 to 20 and recorded the average number of iterations until convergence.

*[Plot: average iterations to convergence vs. N for Gauss-Jacobi, Gauss-Jacobi Fast, Gauss-Seidel, and SOR (w = 1.5)]*

The faster Gauss-Jacobi implementation is not only significantly faster than every other implementation, but its iteration count does not seem to grow with matrix size the way the other methods' counts do.
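Only the function signature of my first implementation survived the formatting above, so here is a minimal sketch of the slow, element-wise version, assuming `x_solution` is the known answer (used only for the error check) and `tol` is the convergence threshold; the stopping test and loop body are my reconstruction, not the original code:

```python
import numpy as np

def GaussJacobi(A, b, x, x_solution, tol):
    """Element-wise Jacobi: build x(k+1) one component at a time."""
    N = len(b)
    iterations = 0
    while np.max(np.abs(x - x_solution)) > tol:
        x_new = np.zeros(N, dtype=np.double)  # x(k+1)
        for i in range(N):
            # off-diagonal sum: a_ij * x_j for every j != i
            s = sum(A[i, j] * x[j] for j in range(N) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
        iterations += 1
    return x, iterations
```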
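For the fast version I only know that it leans on `np.dot`; a plausible sketch in the spirit of common vectorized-Jacobi write-ups (the article's exact code may differ) splits `A` into its diagonal and off-diagonal parts:

```python
def GaussJacobiFast(A, b, x, x_solution, tol):
    """Vectorized Jacobi: the whole x(k+1) vector in one np.dot."""
    D = np.diag(A)           # the diagonal entries a_ii, as a 1-D vector
    R = A - np.diagflat(D)   # A with its diagonal zeroed out
    iterations = 0
    while np.max(np.abs(x - x_solution)) > tol:
        x = (b - np.dot(R, x)) / D  # x(k+1) for all i at once
        iterations += 1
    return x, iterations
```

On paper this computes exactly the same x(k+1) as the element-wise loop, which is what makes the difference in iteration counts so puzzling.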
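A sketch of Gauss-Seidel matching the component formula above - the only change from the slow Jacobi sketch is that each updated component is written back into `x` immediately, so later rows in the same sweep already use it:

```python
def GaussSeidel(A, b, x, x_solution, tol):
    """Gauss-Seidel: reuse freshly updated components within a sweep."""
    N = len(b)
    iterations = 0
    while np.max(np.abs(x - x_solution)) > tol:
        for i in range(N):
            # x[j] already holds the (k+1) value for j < i
            s = sum(A[i, j] * x[j] for j in range(N) if j != i)
            x[i] = (b[i] - s) / A[i, i]
        iterations += 1
    return x, iterations
```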
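And a sketch of SOR with the w = 1.5 used in the tests; everything except the relaxation step mirrors the Gauss-Seidel sketch:

```python
def SOR(A, b, x, x_solution, tol, w=1.5):
    """Successive over-relaxation: w = 1 reduces to Gauss-Seidel."""
    N = len(b)
    iterations = 0
    while np.max(np.abs(x - x_solution)) > tol:
        for i in range(N):
            s = sum(A[i, j] * x[j] for j in range(N) if j != i)
            gs = (b[i] - s) / A[i, i]         # plain Gauss-Seidel value
            x[i] = (1 - w) * x[i] + w * gs    # relax past (or short of) it
        iterations += 1
    return x, iterations
```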
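Finally, a sketch of the kind of test harness the randomized runs describe, using the four functions sketched above; the matrix construction is an assumption on my part - any generator of strictly diagonally dominant systems would do:

```python
rng = np.random.default_rng(0)
methods = {
    "Gauss-Jacobi": GaussJacobi,
    "Gauss-Jacobi Fast": GaussJacobiFast,
    "Gauss-Seidel": GaussSeidel,
    "SOR w=1.5": SOR,
}

for N in range(4, 21):
    totals = dict.fromkeys(methods, 0)
    for _ in range(100):
        # random system, with the diagonal forced to dominate each row
        A = rng.uniform(-1.0, 1.0, (N, N))
        np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1.0)
        x_solution = rng.uniform(-1.0, 1.0, N)
        b = A @ x_solution
        for name, method in methods.items():
            _, its = method(A, b, np.zeros(N), x_solution, 1e-8)
            totals[name] += its
    print(N, {name: t / 100 for name, t in totals.items()})
```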