Question:

I have code that runs over a dataset with around 7,000 columns and 250k records, and it works when I run it through Spyder (I'm currently using Spyder 2.3). Now I want to run it on a schedule with Windows Task Scheduler, so I started tracing the problem by executing the script from cmd (Command Prompt) as:

    python "c:\myfolder\predservice.py"

When I select a subset of the dataset it runs fine, but when I run the whole dataset (df = pd.concat(...)) I get the following error, which I did not get when running through Spyder:

    Traceback (most recent call last):
      File "", line 1, in
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\generic.py", line 11616, in stat_func
        return self._agg_by_level(name, axis=axis,
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\generic.py", line 10440, in _agg_by_level
        grouped = self.groupby(level=level,
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\groupby\groupby.py", line 2522, in groupby
        return klass(obj, by, **kwds)
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\groupby\groupby.py", line 363, in __init__
        obj._consolidate_inplace()
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\generic.py", line 5252, in _consolidate_inplace
        self._protect_consolidate(f)
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\generic.py", line 5241, in _protect_consolidate
        result = f()
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\generic.py", line 5250, in f
        self._data = self._data._consolidate()
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\internals\managers.py", line 932, in consolidate
        bm._consolidate_inplace()
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\internals\managers.py", line 937, in _consolidate_inplace
        self.blocks =
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\internals\managers.py", line 1913, in _consolidate
        list(group_blocks), dtype=dtype,
      File "C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\internals\blocks.py", line 3323, in _merge_blocks
        new_values = new_values
    MemoryError: Unable to allocate array with

Answer:

Your error traceback shows that Task Scheduler is running your script with a 32-bit Python: every path in it points to C:\Users\decisionsupport\AppData\Local\Programs\Python\Python37-32. A 32-bit process can address at most about 4 GB of memory, so the full 7,000-column dataset fails to allocate no matter how much RAM the machine has. Anaconda, which Spyder runs on, is most likely a 64-bit Python, which is why the same script works there. Point the scheduled task at the 64-bit interpreter instead.

Answer:

I met a similar problem recently when deploying with Anaconda. I'll check whether the problem also occurs in a standalone package.
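To confirm which interpreter each environment is actually using, a quick standard-library check can be run once from Spyder and once from the cmd prompt that Task Scheduler uses (nothing here is specific to the setup in the question):

```python
import platform
import struct
import sys

# Full path of the interpreter running this script; under Task Scheduler
# this reveals whether the Python37-32 install is being picked up.
print(sys.executable)

# Pointer size in bits: 32 on a 32-bit build, 64 on a 64-bit build.
bits = struct.calcsize("P") * 8
print(bits)

# The same information from the platform module, e.g. "64bit".
print(platform.architecture()[0])
```

If the two environments report different values, the scheduled task needs to invoke the 64-bit python.exe by its full path rather than whatever `python` resolves to.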
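Independently of the interpreter question, a wide numeric frame's footprint can often be roughly halved by downcasting float columns. A minimal sketch on synthetic data (the shape and random values are made up for illustration, not taken from the question):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a wide numeric dataset.
df = pd.DataFrame(np.random.rand(1_000, 50))

before = df.memory_usage(deep=True).sum()

# float64 -> float32 halves every float column's memory,
# at the cost of precision (~7 significant digits).
df = df.astype(np.float32)

after = df.memory_usage(deep=True).sum()
print(f"{before} -> {after} bytes")
```

For a real load the same idea applies at read time (e.g. passing `dtype=` or `usecols=` to the reader), which avoids ever materializing the float64 copy in the first place.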