How to resolve an error when installing spaCy and GiNZA

Background

I am practicing with MeCab using the code from the book "Pythonで学ぶテキストマイニング入門" (2022, シーアンドアール研究所).
While setting up spaCy and GiNZA so they can be run from Python, I ran into an error.

What I want to achieve

Resolve the error.

Problem / error message

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_6896\3693895956.py in <module>
      1 import spacy
----> 2 nlp = spacy.load("ja_ginza")

~\anaconda3\lib\site-packages\spacy\__init__.py in load(name, vocab, disable, enable, exclude, config)
     52     RETURNS (Language): The loaded nlp object.
     53     """
---> 54     return util.load_model(
     55         name,
     56         vocab=vocab,

~\anaconda3\lib\site-packages\spacy\util.py in load_model(name, vocab, disable, enable, exclude, config)
    430         return get_lang_class(name.replace("blank:", ""))()
    431     if is_package(name):  # installed as package
--> 432         return load_model_from_package(name, **kwargs)  # type: ignore[arg-type]
    433     if Path(name).exists():  # path to model data directory
    434         return load_model_from_path(Path(name), **kwargs)  # type: ignore[arg-type]

~\anaconda3\lib\site-packages\spacy\util.py in load_model_from_package(name, vocab, disable, enable, exclude, config)
    466     """
    467     cls = importlib.import_module(name)
--> 468     return cls.load(vocab=vocab, disable=disable, enable=enable, exclude=exclude, config=config)  # type: ignore[attr-defined]
    469
    470

~\anaconda3\lib\site-packages\ja_ginza\__init__.py in load(**overrides)
      8
      9 def load(**overrides):
---> 10     return load_model_from_init_py(__file__, **overrides)

~\anaconda3\lib\site-packages\spacy\util.py in load_model_from_init_py(init_file, vocab, disable, enable, exclude, config)
    647     if not model_path.exists():
    648         raise IOError(Errors.E052.format(path=data_path))
--> 649     return load_model_from_path(
    650         data_path,
    651         vocab=vocab,

~\anaconda3\lib\site-packages\spacy\util.py in load_model_from_path(model_path, meta, vocab, disable, enable, exclude, config)
    504         overrides = dict_to_dot(config)
    505         config = load_config(config_path, overrides=overrides)
--> 506     nlp = load_model_from_config(
    507         config,
    508         vocab=vocab,

~\anaconda3\lib\site-packages\spacy\util.py in load_model_from_config(config, meta, vocab, disable, enable, exclude, auto_fill, validate)
    552     # registry, including custom subclasses provided via entry points
    553     lang_cls = get_lang_class(nlp_config["lang"])
--> 554     nlp = lang_cls.from_config(
    555         config,
    556         vocab=vocab,

~\anaconda3\lib\site-packages\spacy\language.py in from_config(cls, config, vocab, disable, enable, exclude, meta, auto_fill, validate)
   1816                 # The pipe name (key in the config) here is the unique name
   1817                 # of the component, not necessarily the factory
-> 1818                 nlp.add_pipe(
   1819                     factory,
   1820                     name=pipe_name,

~\anaconda3\lib\site-packages\spacy\language.py in add_pipe(self, factory_name, name, before, after, first, last, source, config, raw_config, validate)
    799                 lang_code=self.lang,
    800             )
--> 801         pipe_component = self.create_pipe(
    802             factory_name,
    803             name=name,

~\anaconda3\lib\site-packages\spacy\language.py in create_pipe(self, factory_name, name, config, raw_config, validate)
    659                 lang_code=self.lang,
    660             )
--> 661             raise ValueError(err)
    662         pipe_meta = self.get_factory_meta(factory_name)
    663         # This is unideal, but the alternative would mean you always need to

ValueError: [E002] Can't find factory for 'compound_splitter' for language Japanese (ja). This usually happens when spaCy calls `nlp.create_pipe` with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator `@Language.component` (for function components) or `@Language.factory` (for class components). Available factories: attribute_ruler, tok2vec, merge_noun_chunks, merge_entities, merge_subtokens, token_splitter, doc_cleaner, parser, beam_parser, lemmatizer, trainable_lemmatizer, entity_linker, ner, beam_ner, entity_ruler, tagger, morphologizer, senter, sentencizer, textcat, spancat, future_entity_ruler, span_ruler, textcat_multilabel, ja.morphologizer
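For context on what the E002 message means: `compound_splitter` is a pipeline component name that the `ginza` package registers with spaCy when it is imported, so if `ginza` is missing or incompatible with the installed spaCy version, the name is simply unknown to the registry. A toy sketch in plain Python (an illustration only, not spaCy's real code) of how such a name-based factory registry behaves:

```python
# Toy factory registry (an illustration, NOT spaCy's actual implementation):
# components are looked up by name, and an unregistered name raises a
# ValueError much like spaCy's E002.
factories = {}

def factory(name):
    """Decorator that registers a component constructor under a name."""
    def decorator(fn):
        factories[name] = fn
        return fn
    return decorator

def create_pipe(name):
    """Build a component, failing loudly if its factory was never registered."""
    if name not in factories:
        raise ValueError(f"[E002] Can't find factory for '{name}'")
    return factories[name]()

@factory("sentencizer")
def make_sentencizer():
    return "sentencizer component"

print(create_pipe("sentencizer"))   # found: the factory was registered
# create_pipe("compound_splitter")  # would raise, as in the traceback above
```

In this analogy, importing `ginza` is what would run the `@factory("compound_splitter")` registration, which is why the error points at a missing or broken `ginza` installation rather than at the model data itself.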

Relevant source code

pip install -U ginza
pip install -U ginza ja-ginza
pip install -U spacy
pip install transformers -U
import spacy
nlp = spacy.load("ja_ginza")
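Since errors like E002 often come down to which package versions pip actually installed, here is a small diagnostic sketch (the package names are taken from the pip commands above) that lists what is present in the current environment, using only the standard library:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Return {package: version string or None if not installed}."""
    out = {}
    for pkg in packages:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None  # not installed in this environment
    return out

# GiNZA's 'compound_splitter' factory comes from the ginza package, so
# spacy.load("ja_ginza") can only work if all three are installed together.
print(installed_versions(["spacy", "ginza", "ja-ginza"]))
```

Running this inside the same Jupyter kernel that raises the error would show whether `ginza` is installed at all, and which spaCy version it has to work against.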

What I tried

Ran pip install -U ginza ja-ginza-electra
Ran pip install -U ginza ja-ginza
Ran pip install transformers -U

Additional information (framework/tool versions, etc.)

Windows 10
MeCab 64-bit
Python (Jupyter Notebook via Anaconda)
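When reporting this kind of installation issue, it helps to capture the environment details programmatically rather than from memory; a minimal sketch using only the standard library:

```python
import platform
import sys

def environment_summary():
    """Gather the details usually asked for when reporting install problems."""
    return {
        "os": f"{platform.system()} {platform.release()}",
        "python": sys.version.split()[0],
        "executable": sys.executable,  # shows which Anaconda env is active
    }

for key, value in environment_summary().items():
    print(f"{key}: {value}")
```

The `executable` line is particularly useful with Anaconda, since pip run in a terminal and the Jupyter kernel can point at different environments, in which case packages installed in one are invisible to the other.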
