
Syntax error, insert "... VariableDeclaratorId" to complete FormalParameterList

I am running into some problems with this code:

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller {

     String crawlStorageFolder = "/data/crawl/root";
     int numberOfCrawlers = 7;

     CrawlConfig config = new CrawlConfig();
     config.setCrawlStorageFolder(crawlStorageFolder);
     /*
      * Instantiate the controller for this crawl.
      */
     PageFetcher pageFetcher = new PageFetcher(config);
     RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
     RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
     CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

     /*
      * For each crawl, you need to add some seed urls. These are the first
      * URLs that are fetched and then the crawler starts following links
      * which are found in these pages
      */
     controller.addSeed("http://www.ics.uci.edu/~lopes/");
     controller.addSeed("http://www.ics.uci.edu/~welling/");
     controller.addSeed("http://www.ics.uci.edu/");
     /*
      * Start the crawl. This is a blocking operation, meaning that your code
      * will reach the line after this only when crawling is finished.
      */
     controller.start(MyCrawler.class, numberOfCrawlers);
 }

I get the following error:

"Syntax error, insert '... VariableDeclaratorId' to complete FormalParameterList" on the line config.setCrawlStorageFolder(crawlStorageFolder)

4
Dinesh Purty

You can't have arbitrary code like that directly in a class body. It has to be inside a method (or a constructor, or an initializer block).
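
For illustration, a minimal sketch (a hypothetical Example class, not from the question's code) showing the three places where statements may legally appear:

    // Hypothetical example, unrelated to crawler4j: the three legal homes for statements.
    public class Example {

        int x = 1; // a field initializer expression is allowed at class level

        { // instance initializer block: runs before every constructor body
            System.out.println("initializer, x = " + x);
        }

        public Example() { // constructor
            System.out.println("constructor");
        }

        void run() { // ordinary method
            System.out.println("method");
        }
    }

A bare statement such as config.setCrawlStorageFolder(crawlStorageFolder); at class level fits none of these, so the compiler tries to parse it as a member declaration and fails with the FormalParameterList error you are seeing.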

3
JB Nizet

Your code is sitting directly in the class body. Put it inside a main method so it can run. Note that the CrawlController constructor declares throws Exception, so main must declare (or catch) it as well:

    import edu.uci.ics.crawler4j.crawler.CrawlConfig;
    import edu.uci.ics.crawler4j.crawler.CrawlController;
    import edu.uci.ics.crawler4j.fetcher.PageFetcher;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

    public class Controller {

        // CrawlController's constructor declares "throws Exception",
        // so main must declare (or catch) it.
        public static void main(String[] args) throws Exception {

            String crawlStorageFolder = "/data/crawl/root";
            int numberOfCrawlers = 7;

            CrawlConfig config = new CrawlConfig();
            config.setCrawlStorageFolder(crawlStorageFolder);

            /*
             * Instantiate the controller for this crawl.
             */
            PageFetcher pageFetcher = new PageFetcher(config);
            RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
            RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
            CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

            /*
             * For each crawl, you need to add some seed URLs. These are the
             * first URLs that are fetched; the crawler then starts following
             * links found in these pages.
             */
            controller.addSeed("http://www.ics.uci.edu/~lopes/");
            controller.addSeed("http://www.ics.uci.edu/~welling/");
            controller.addSeed("http://www.ics.uci.edu/");

            /*
             * Start the crawl. This is a blocking operation, meaning that your
             * code will reach the line after this only when crawling is finished.
             */
            controller.start(MyCrawler.class, numberOfCrawlers);
        }
    }
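
Note that controller.start(MyCrawler.class, numberOfCrawlers) assumes a MyCrawler class exists on the classpath; it is not shown in the question. A minimal sketch, assuming the crawler4j 4.x API (in older 3.x versions, shouldVisit takes only a WebURL parameter):

    import edu.uci.ics.crawler4j.crawler.Page;
    import edu.uci.ics.crawler4j.crawler.WebCrawler;
    import edu.uci.ics.crawler4j.url.WebURL;

    public class MyCrawler extends WebCrawler {

        @Override
        public boolean shouldVisit(Page referringPage, WebURL url) {
            // Follow only links that stay on the seed domain.
            return url.getURL().toLowerCase().startsWith("http://www.ics.uci.edu/");
        }

        @Override
        public void visit(Page page) {
            // Called once for every successfully fetched page.
            System.out.println("Visited: " + page.getWebURL().getURL());
        }
    }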
0
Rushdi Shams